Domain-general learning
Domain-general learning theories of development suggest that humans are born with mechanisms in the brain that exist to support and guide learning on a broad level, regardless of the type of information being learned. Domain-general learning theories also recognize that although learning different types of new information may be processed in the same way and in the same areas of the brain, different domains also function interdependently. Because these generalized domains work together, skills developed from one learned activity may translate into benefits with skills not yet learned. Another facet of domain-general learning theories is that knowledge within domains is cumulative, building over time to contribute to our greater knowledge structure. Psychologists whose theories align with the domain-general framework include developmental psychologist Jean Piaget, who theorized that people develop a global knowledge structure which contains cohesive, whole knowledge internalized from experience, and psychologist Charles Spearman, whose work led to a theory on the existence of a single factor accounting for all general cognitive ability.
Domain-general learning theories stand in direct opposition to domain-specific learning theories, sometimes called theories of modularity. Domain-specific learning theories posit that humans learn different types of information differently, and have distinctions within the brain for many of these domains. Domain-specific learning theorists also assert that these neural domains are independent, purposed solely for the acquisition of a single skill (e.g. facial recognition or mathematics), and may not provide direct benefits in the learning of other, unrelated skills.
Related Theories
Piaget’s Theory of Cognitive Development
Developmental psychologist Jean Piaget theorized that one's cognitive ability, or intelligence, defined as the ability to adapt to all aspects of reality, evolves through a series of four qualitatively distinct stages (the sensorimotor, pre-operational, concrete operational and formal operational stages). Piaget's theory describes three core cognitive processes that serve as mechanisms for transitioning from one stage to the next.
Piaget's core processes for developmental change:
Assimilation: The process of transforming new information so that it fits with one's existing way of thinking.
Accommodation: The process of adapting one's thinking to account for new experiences.
Equilibration: The process by which one integrates one's knowledge about the world into one unified whole.
However, these processes are not the only processes responsible for progressing through Piaget's developmental stages. Each stage is differentiated based upon the types of conceptual content that can be mastered within it. Piaget's theory holds that transitioning from one stage of development to the next is not only a result of assimilation, accommodation, and equilibration, but also a result of developmental changes in domain-general mechanisms. As humans mature, various domain-general mechanisms become more sophisticated, and thus, according to Piaget, allow for growth in cognitive functioning.
For example, Piaget's theory notes that humans transition into the concrete operational stage of cognitive development when they acquire the ability to take perspective and no longer exhibit egocentric thinking (a characteristic of the pre-operational stage). This change can be viewed as the result of developmental changes in information processing capacity. Information processing is a mechanism used across many different domains of cognitive functioning, and thus can be seen as a domain-general mechanism.
Psychometric Theories of Intelligence
Psychometric analysis of measurements of human cognitive abilities (intelligence) may suggest that there is a single underlying mechanism that impacts how humans learn. In the early 20th century, Charles Spearman noticed that children's scores on different measures of cognitive abilities were positively correlated. Spearman believed that these correlations could be attributed to a general mental ability or process that is utilized across all cognitive tasks. Spearman labeled this general mental ability the g factor, and believed g could represent an individual's overall cognitive functioning. The presence of this g factor across different cognitive measures is well established and uncontroversial in statistical research. It may be that the g factor reflects domain-general learning (cognitive mechanisms involved in all cognition), and that this general learning accounts for the positive correlations across seemingly different cognitive tasks. It is important to note, however, that there is currently no consensus as to what causes the positive correlations.
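The "positive manifold" Spearman observed can be made concrete with a toy simulation. The following Python sketch is purely illustrative, not a reconstruction of Spearman's actual procedure: the sample size and factor loadings are invented, and real psychometric data is far noisier.

```python
# Toy illustration: simulate test scores that share a single latent factor
# "g" and observe the positive pairwise correlations Spearman described.
import numpy as np

rng = np.random.default_rng(0)
n_people = 1000

g = rng.normal(size=n_people)               # latent general ability
loadings = np.array([0.8, 0.7, 0.6, 0.5])   # how strongly each test taps g

# Each test score = its loading on g plus independent, test-specific noise,
# scaled so every simulated test has roughly unit variance.
noise = rng.normal(size=(n_people, 4)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + noise

# Every pairwise correlation comes out positive, even though the tests
# share nothing except the latent factor.
print(np.round(np.corrcoef(scores.T), 2))
```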
Spearman's work was expanded upon by Raymond B. Cattell, who broke g into two broad abilities: fluid intelligence (Gf) and crystallized intelligence (Gc). Cattell's student, John Horn, added additional broad abilities to Cattell's model of intelligence. In 1993, John B. Carroll added more specificity to Cattell and Horn's Gf-Gc model by adding a third layer of human intelligence factors. Carroll named these factors “narrow abilities”. Narrow abilities are described as abilities that do not correlate with skills outside their domain, following more along the lines of domain-specific learning theories.
Despite breaking g into more specific areas, or domains of intelligence, Carroll maintained that a single general ability was essential to intelligence theories. This suggests that Carroll, to some extent, believed cognitive abilities were domain-general.
Skills That May Be Acquired via Domain-General Mechanisms
As discussed above, Piaget's theory of cognitive development posits that periods of major developmental change come about due to maturation of domain-general cognitive mechanisms. However, although Piaget's theory of cognitive development can be credited with establishing the field of cognitive development, some aspects of his theory have not withstood the test of time.
Despite this, researchers who call themselves "neo-Piagetians" have often focused on the role of domain-general cognitive processes in constraining cognitive development. It has been found that many of the skills humans acquire, namely memory, executive functioning, and language development, depend on domain-general mechanisms rather than on highly specialized cognitive mechanisms.
Memory
One theory of memory development suggests that basic (domain-general) memory processes become more efficient through maturation. In this theory, basic memory processes are frequently used, rapidly executed memory activities, including association, generalization, recognition, and recall. The basic processes theory of memory development states that these memory processes underlie all cognition, as it holds that all more complex cognitive activities are built by combining these basic processes in different ways. Thus, these basic memory processes can be seen as domain-general processes that can be applied across various domains.
Domain-general processes in memory development:
Association is the most basic memory process. The ability to associate stimuli with responses is present from birth.
Generalization is the tendency to respond in the same way to different but similar stimuli.
Recognition is a cognitive process that matches information from a stimulus with information retrieved from memory.
Recall is the mental process of retrieving information from the past.
In addition to these general processes, working memory in particular has been extensively studied as a domain-general mechanism that constrains cognitive development. For example, researchers believe that with maturation, one is able to hold more complex structures in working memory, which increases the number of possible computations that underlie inference and learning. Thus, working memory can be viewed as a domain-general mechanism that aids development across many different domains.
Executive Functions
Researchers have expanded the search for domain-general mechanisms that underlie cognitive development beyond working memory. Advances in cognitive neuroscience technology are credited with making this expansion possible. Within the last decade, researchers have begun to focus on a group of cognitive mechanisms collectively named executive functions. Mechanisms commonly labeled executive functions include working memory, inhibition, and set shifting, as well as higher-order mechanisms that involve combinations of these (planning, problem solving, reasoning).
Piagetian tasks – tasks that measure behaviors that relate to cognitive abilities associated with Piaget's developmental stages – have been used in studies of cognitive neuroscience to investigate whether executive functions relate to cognitive development. Such studies revealed that the maturation of the prefrontal cortex (an area of the brain identified to underlie the development of executive functions such as working memory and inhibition) may relate to success on tasks that measure the Piagetian concept of object permanence. Thus, this research supports Piaget's notion that developmental changes in domain-general mechanisms promote cognitive development.
Language
The general cognitive processes perspective of language development emphasizes characteristics of the language learner as the source of development. It states that broad cognitive processes, such as attending, perceiving, and remembering, are sufficient for a child to learn new words. Important to this perspective is the idea that such cognitive processes are domain-general and are applied to learning many different kinds of information in addition to benefiting word acquisition. This perspective contrasts with the grammatical cues perspective, which emphasizes characteristics of the language input as a source of development. It also contrasts with the constraints perspective of language development, in which children are said to be able to learn many words quickly because of constraints that are specialized for language learning.
Opposing Theories
The relationship between domain-general learning and domain-specific learning (also known as the modularity debate or modularity of mind) has been an ongoing debate for evolutionary psychologists.
The modularity of mind thesis states that the brain is constructed of neural structures (or modules) which have distinct functions. Jerry Fodor, an American philosopher and cognitive scientist, argued in his 1983 book The Modularity of Mind that brain modules are specialized and may only operate on certain kinds of inputs. Fodor defined modules as “functionally specialized cognitive systems”. These modules are said to be mostly independent, to develop on different timetables, and to be influenced by the variety of different experiences an individual may have. Some argue that Piaget's domain-general theory of learning undermines the influence of socio-cultural factors on an individual's development. More specifically, the theory does not explain the influence of parental nurture and social interactions on human development.
Domain-specific learning is a theory in developmental psychology holding that the development of one set of skills is independent from the development of other types of skills. The theory suggests that training or practice in one area may not influence another. Frankenhuis and Ploeger define domain-specificity as meaning that “a given cognitive mechanism accepts, or is specialized to operate on, only a specific class of information”. Furthermore, domain-specific learning prescribes different learning activities for students in order to meet required learning outcomes.
Modern cognitive psychologists suggest a more complex relationship between domain-generality and domain-specificity in the brain. Current research suggests these networks may exist together in the brain, and the extent to which they function in tandem may vary by task and skill-level.
Possible Applications
Workplaces
Technological advancements and changes in the labor market highlight the need for workers to be adaptive. This may suggest that school curricula should incorporate activities focusing on developing the skills necessary for dynamic environments. People tend to use domain-general learning processes when initially learning how to perform certain tasks, and less so once these tasks become extensively practiced.
Early Childhood Education
Problem solving is an individual's ability to engage in cognitive processing in order to understand and solve problems where a solution may not be immediately apparent. Domain-specific problem-solving skills may provide students with narrow knowledge and abilities. Because of this, school teachers, policy makers and curriculum developers may find it beneficial to incorporate domain-general skills (such as time management, teamwork or leadership) related to problem solving into school curricula. Domain-general problem solving provides students with cross-curricular skills and strategies that can be transferred across situations, environments, and domains. Examples of cross-curricular skills include, but are not limited to: information processing, self-regulation and decision making.
Language Development
Additionally, linguistic knowledge and language development are examples of domain-general skills. Infants can learn rules and identify patterns in stimuli, which may imply generalizable learning. Parents of young children and early childhood educators may therefore want to consider applying domain-general approaches when supporting language development.
See also
Cognition
Epistemology
Instructional theory
Learning
Learning theory (education)
Neuroscience
Modularity of mind
Constructivism
Neuroconstructivism
Piaget's theory of cognitive development
Poverty of the stimulus
Data and information visualization
Data and information visualization (data viz/vis or info viz/vis) is the practice of designing and creating easy-to-communicate and easy-to-understand graphic or visual representations of a large amount of complex quantitative and qualitative data and information with the help of static, dynamic or interactive visual items. Typically based on data and information collected from a certain domain of expertise, these visualizations are intended for a broader audience to help them visually explore and discover, quickly understand, interpret and gain important insights into otherwise difficult-to-identify structures, relationships, correlations, local and global patterns, trends, variations, constancy, clusters, outliers and unusual groupings within data (exploratory visualization). When intended for the general public (mass communication) to convey a concise version of known, specific information in a clear and engaging manner (presentational or explanatory visualization), it is typically called information graphics.
Data visualization is concerned with visually presenting sets of primarily quantitative raw data in a schematic form. The visual formats used in data visualization include tables, charts and graphs (e.g. pie charts, bar charts, line charts, area charts, cone charts, pyramid charts, donut charts, histograms, spectrograms, cohort charts, waterfall charts, funnel charts, bullet graphs, etc.), diagrams, plots (e.g. scatter plots, distribution plots, box-and-whisker plots), geospatial maps (such as proportional symbol maps, choropleth maps, isopleth maps and heat maps), figures, correlation matrices, percentage gauges, etc., which sometimes can be combined in a dashboard.
Information visualization, on the other hand, deals with multiple, large-scale and complicated datasets which contain quantitative (numerical) data as well as qualitative (non-numerical, i.e. verbal or graphical) and primarily abstract information and its goal is to add value to raw data, improve the viewers' comprehension, reinforce their cognition and help them derive insights and make decisions as they navigate and interact with the computer-supported graphical display. Visual tools used in information visualization include maps (such as tree maps), animations, infographics, Sankey diagrams, flow charts, network diagrams, semantic networks, entity-relationship diagrams, Venn diagrams, timelines, mind maps, etc.
Emerging technologies like virtual, augmented and mixed reality have the potential to make information visualization more immersive, intuitive, interactive and easily manipulable and thus enhance the user's visual perception and cognition. In data and information visualization, the goal is to graphically present and explore abstract, non-physical and non-spatial data collected from databases, information systems, file systems, documents, business and financial data, etc. (presentational and exploratory visualization) which is different from the field of scientific visualization, where the goal is to render realistic images based on physical and spatial scientific data to confirm or reject hypotheses (confirmatory visualization).
Effective data visualization is properly sourced, contextualized, simple and uncluttered. The underlying data is accurate and up-to-date to make sure that insights are reliable. Graphical items are well-chosen for the given datasets and aesthetically appealing, with shapes, colors and other visual elements used deliberately in a meaningful and non-distracting manner. The visuals are accompanied by supporting texts (labels and titles). These verbal and graphical components complement each other to ensure clear, quick and memorable understanding. Effective information visualization is aware of the needs and concerns and the level of expertise of the target audience, deliberately guiding them to the intended conclusion. Such effective visualization can be used not only for conveying specialized, complex, big data-driven ideas to a wider group of non-technical audience in a visually appealing, engaging and accessible manner, but also to domain experts and executives for making decisions, monitoring performance, generating new ideas and stimulating research. In addition, data scientists, data analysts and data mining specialists use data visualization to check the quality of data, find errors, unusual gaps and missing values in data, clean data, explore the structures and features of data and assess outputs of data-driven models. In business, data and information visualization can constitute a part of data storytelling, where they are paired with a coherent narrative structure or storyline to contextualize the analyzed data and communicate the insights gained from analyzing the data clearly and memorably with the goal of convincing the audience into making a decision or taking an action in order to create business value. This can be contrasted with the field of statistical graphics, where complex statistical data are communicated graphically in an accurate and precise manner among researchers and analysts with statistical expertise to help them perform exploratory data analysis or to convey the results of such analyses, where visual appeal, capturing attention to a certain issue and storytelling are not as important.
The field of data and information visualization is of interdisciplinary nature as it incorporates principles found in the disciplines of descriptive statistics (as early as the 18th century), visual communication, graphic design, cognitive science and, more recently, interactive computer graphics and human-computer interaction. Since effective visualization requires design skills, statistical skills and computing skills, it is argued by authors such as Gershon and Page that it is both an art and a science. The neighboring field of visual analytics marries statistical data analysis, data and information visualization and human analytical reasoning through interactive visual interfaces to help human users reach conclusions, gain actionable insights and make informed decisions which are otherwise difficult for computers to do.
Research into how people read and misread various types of visualizations is helping to determine what types and features of visualizations are most understandable and effective in conveying information. On the other hand, unintentionally poor or intentionally misleading and deceptive visualizations (misinformative visualization) can function as powerful tools which disseminate misinformation, manipulate public perception and divert public opinion toward a certain agenda. Thus data visualization literacy has become an important component of data and information literacy in the information age akin to the roles played by textual, mathematical and visual literacy in the past.
Overview
The field of data and information visualization has emerged "from research in human–computer interaction, computer science, graphics, visual design, psychology, and business methods. It is increasingly applied as a critical component in scientific research, digital libraries, data mining, financial data analysis, market studies, manufacturing production control, and drug discovery".
Data and information visualization presumes that "visual representations and interaction techniques take advantage of the human eye's broad bandwidth pathway into the mind to allow users to see, explore, and understand large amounts of information at once. Information visualization focused on the creation of approaches for conveying abstract information in intuitive ways."
Data analysis is an indispensable part of all applied research and problem solving in industry. The most fundamental data analysis approaches are visualization (histograms, scatter plots, surface plots, tree maps, parallel coordinate plots, etc.), statistics (hypothesis testing, regression, PCA, etc.), data mining (association mining, etc.), and machine learning methods (clustering, classification, decision trees, etc.). Among these approaches, information visualization, or visual data analysis, is the most reliant on the cognitive skills of human analysts, and allows the discovery of unstructured actionable insights that are limited only by human imagination and creativity. The analyst does not have to learn any sophisticated methods to be able to interpret the visualizations of the data. Information visualization is also a hypothesis generation scheme, which can be, and typically is, followed by more analytical or formal analysis, such as statistical hypothesis testing.
To communicate information clearly and efficiently, data visualization uses statistical graphics, plots, information graphics and other tools. Numerical data may be encoded using dots, lines, or bars, to visually communicate a quantitative message. Effective visualization helps users analyze and reason about data and evidence. It makes complex data more accessible, understandable, and usable, but can also be reductive. Users may have particular analytical tasks, such as making comparisons or understanding causality, and the design principle of the graphic (i.e., showing comparisons or showing causality) follows the task. Tables are generally used where users will look up a specific measurement, while charts of various types are used to show patterns or relationships in the data for one or more variables.
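As a minimal illustration of these encodings, the following Python/matplotlib sketch shows the three basic marks the paragraph mentions: points for relationships, lines for trends, and bars for comparisons. All data here is invented for illustration.

```python
# Three basic visual encodings: points (relationship), lines (trend),
# bars (comparison). Toy data only.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(10, 3))

# Points: does y tend to move with x? (relationship)
x = rng.normal(size=50)
ax1.scatter(x, 0.7 * x + rng.normal(scale=0.5, size=50))
ax1.set_title("scatter: relationship")

# Lines: how does a value evolve? (trend over time)
ax2.plot(range(12), np.cumsum(rng.normal(size=12)))
ax2.set_title("line: trend")

# Bars: how do categories compare? (comparison)
ax3.bar(["A", "B", "C"], [3, 7, 5])
ax3.set_title("bar: comparison")

fig.tight_layout()
plt.show()
```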
Data visualization refers to the techniques used to communicate data or information by encoding it as visual objects (e.g., points, lines, or bars) contained in graphics. The goal is to communicate information clearly and efficiently to users. It is one of the steps in data analysis or data science. According to Vitaly Friedman (2008) the "main goal of data visualization is to communicate information clearly and effectively through graphical means. It doesn't mean that data visualization needs to look boring to be functional or extremely sophisticated to look beautiful. To convey ideas effectively, both aesthetic form and functionality need to go hand in hand, providing insights into a rather sparse and complex data set by communicating its key aspects in a more intuitive way. Yet designers often fail to achieve a balance between form and function, creating gorgeous data visualizations which fail to serve their main purpose — to communicate information".
Indeed, Fernanda Viegas and Martin M. Wattenberg suggested that an ideal visualization should not only communicate clearly, but stimulate viewer engagement and attention.
Data visualization is closely related to information graphics, information visualization, scientific visualization, exploratory data analysis and statistical graphics. In the new millennium, data visualization has become an active area of research, teaching and development. According to Post et al. (2002), it has united scientific and information visualization.
In the commercial environment, data visualization often takes the form of dashboards. Infographics are another very common form of data visualization.
Principles
Characteristics of effective graphical displays
Edward Tufte has explained that users of information displays are executing particular analytical tasks such as making comparisons. The design principle of the information graphic should support the analytical task. As William Cleveland and Robert McGill show, different graphical elements accomplish this more or less effectively. For example, dot plots and bar charts outperform pie charts.
In his 1983 book The Visual Display of Quantitative Information, Edward Tufte defines 'graphical displays' and principles for effective graphical display in the following passage:
"Excellence in statistical graphics consists of complex ideas communicated with clarity, precision, and efficiency. Graphical displays should:
show the data
induce the viewer to think about the substance rather than about methodology, graphic design, the technology of graphic production, or something else
avoid distorting what the data has to say
present many numbers in a small space
make large data sets coherent
encourage the eye to compare different pieces of data
reveal the data at several levels of detail, from a broad overview to the fine structure
serve a reasonably clear purpose: description, exploration, tabulation, or decoration
be closely integrated with the statistical and verbal descriptions of a data set.
Graphics reveal data. Indeed, graphics can be more precise and revealing than conventional statistical computations."
For example, the Minard diagram shows the losses suffered by Napoleon's army in the 1812–1813 period. Six variables are plotted: the size of the army, its location on a two-dimensional surface (x and y), time, the direction of movement, and temperature. The line width illustrates a comparison (size of the army at points in time), while the temperature axis suggests a cause of the change in army size. This multivariate display on a two-dimensional surface tells a story that can be grasped immediately while identifying the source data to build credibility. Tufte wrote in 1983 that: "It may well be the best statistical graphic ever drawn."
Not applying these principles may result in misleading graphs, distorting the message, or supporting an erroneous conclusion. According to Tufte, chartjunk refers to the extraneous interior decoration of the graphic that does not enhance the message or gratuitous three-dimensional or perspective effects. Needlessly separating the explanatory key from the image itself, requiring the eye to travel back and forth from the image to the key, is a form of "administrative debris." The ratio of "data to ink" should be maximized, erasing non-data ink where feasible.
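As a hedged sketch of the data-ink principle, the following matplotlib snippet starts from a default bar chart and erases non-data ink. The specific styling choices are one possible reading of Tufte's advice, not an official recipe.

```python
# Maximizing the data-ink ratio: remove frame, ticks and a redundant axis,
# and label the bars directly instead.
import matplotlib.pyplot as plt

categories, values = ["A", "B", "C", "D"], [4, 7, 2, 5]
fig, ax = plt.subplots()

ax.bar(categories, values, color="gray")

# Erase non-data ink: the box around the plot adds no information here.
for side in ("top", "right", "left"):
    ax.spines[side].set_visible(False)
ax.tick_params(left=False)   # tick marks duplicate the category labels
ax.set_yticks([])            # drop the value axis; label bars directly
for i, v in enumerate(values):
    ax.text(i, v + 0.1, str(v), ha="center")

plt.show()
```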
The Congressional Budget Office summarized several best practices for graphical displays in a June 2014 presentation. These included: a) Knowing your audience; b) Designing graphics that can stand alone outside the report's context; and c) Designing graphics that communicate the key messages in the report.
Quantitative messages
Author Stephen Few described eight types of quantitative messages that users may attempt to understand or communicate from a set of data and the associated graphs used to help communicate the message:
Time-series: A single variable is captured over a period of time, such as the unemployment rate or temperature measures over a 10-year period. A line chart may be used to demonstrate the trend over time.
Ranking: Categorical subdivisions are ranked in ascending or descending order, such as a ranking of sales performance (the measure) by sales persons (the category, with each sales person a categorical subdivision) during a single period. A bar chart may be used to show the comparison across the sales persons.
Part-to-whole: Categorical subdivisions are measured as a ratio to the whole (i.e., a percentage out of 100%). A pie chart or bar chart can show the comparison of ratios, such as the market share represented by competitors in a market.
Deviation: Categorical subdivisions are compared against a reference, such as a comparison of actual vs. budget expenses for several departments of a business for a given time period. A bar chart can show comparison of the actual versus the reference amount.
Frequency distribution: Shows the number of observations of a particular variable for given interval, such as the number of years in which the stock market return is between intervals such as 0–10%, 11–20%, etc. A histogram, a type of bar chart, may be used for this analysis. A boxplot helps visualize key statistics about the distribution, such as median, quartiles, outliers, etc.
Correlation: Comparison between observations represented by two variables (X,Y) to determine if they tend to move in the same or opposite directions. For example, plotting unemployment (X) and inflation (Y) for a sample of months. A scatter plot is typically used for this message.
Nominal comparison: Comparing categorical subdivisions in no particular order, such as the sales volume by product code. A bar chart may be used for this comparison.
Geographic or geospatial: Comparison of a variable across a map or layout, such as the unemployment rate by state or the number of persons on the various floors of a building. A cartogram is a typical graphic used.
Analysts reviewing a set of data may consider whether some or all of the messages and graphic types above are applicable to their task and audience. The process of trial and error to identify meaningful relationships and messages in the data is part of exploratory data analysis.
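The pairings in the list above can be restated as a simple lookup table. The sketch below is only a restatement of Few's suggestions as a data structure; in practice chart choice also depends on the data and the audience, and the function name is invented for illustration.

```python
# Few's message-to-graph pairings, written out as a lookup table.
FEW_MESSAGE_TO_CHART = {
    "time-series": "line chart",
    "ranking": "bar chart",
    "part-to-whole": "pie chart or bar chart",
    "deviation": "bar chart",
    "frequency distribution": "histogram or boxplot",
    "correlation": "scatter plot",
    "nominal comparison": "bar chart",
    "geographic": "cartogram or map",
}

def suggest_chart(message_type: str) -> str:
    """Return the chart type Few associates with a quantitative message."""
    return FEW_MESSAGE_TO_CHART.get(message_type.lower(),
                                    "unknown message type")

print(suggest_chart("correlation"))  # -> scatter plot
```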
Visual perception and data visualization
A human can distinguish differences in line length, shape, orientation, distances, and color (hue) readily without significant processing effort; these are referred to as "pre-attentive attributes". For example, it may require significant time and effort ("attentive processing") to identify the number of times the digit "5" appears in a series of numbers; but if that digit is different in size, orientation, or color, instances of the digit can be noted quickly through pre-attentive processing.
Compelling graphics take advantage of pre-attentive processing and attributes and the relative strength of these attributes. For example, since humans can more easily process differences in line length than surface area, it may be more effective to use a bar chart (which takes advantage of line length to show comparison) rather than pie charts (which use surface area to show comparison).
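The digit-search example above can be reproduced in a few lines of matplotlib: in the left grid every digit looks alike and must be scanned attentively, while in the right grid the colored 5s pop out pre-attentively. The grid size and layout are purely illustrative.

```python
# Pre-attentive pop-out demo: find the 5s in a grid of random digits.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(2)
digits = rng.integers(0, 10, size=(10, 20))

fig, (ax_plain, ax_color) = plt.subplots(1, 2, figsize=(10, 4))
for ax, use_color, title in [(ax_plain, False, "attentive search"),
                             (ax_color, True, "pre-attentive pop-out")]:
    for (row, col), d in np.ndenumerate(digits):
        # Color only makes the 5s distinct in the right-hand panel.
        color = "red" if (use_color and d == 5) else "black"
        ax.text(col, -row, str(d), color=color, ha="center", va="center")
    ax.set_xlim(-1, 20)
    ax.set_ylim(-10, 1)
    ax.axis("off")
    ax.set_title(title)

plt.show()
```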
Human perception/cognition and data visualization
Almost all data visualizations are created for human consumption. Knowledge of human perception and cognition is necessary when designing intuitive visualizations. Cognition refers to processes in human beings like perception, attention, learning, memory, thought, concept formation, reading, and problem solving. Human visual processing is efficient in detecting changes and making comparisons between quantities, sizes, shapes and variations in lightness. When properties of symbolic data are mapped to visual properties, humans can browse through large amounts of data efficiently. It is estimated that 2/3 of the brain's neurons can be involved in visual processing. Proper visualization provides a different approach to show potential connections, relationships, etc. which are not as obvious in non-visualized quantitative data. Visualization can become a means of data exploration.
Studies have shown that individuals used on average 19% fewer cognitive resources, and were 4.5% better able to recall details, when using data visualization compared with text.
History
The modern study of visualization started with computer graphics, which "has from its beginning been used to study scientific problems. However, in its early days the lack of graphics power often limited its usefulness. The recent emphasis on visualization started in 1987 with the special issue of Computer Graphics on Visualization in Scientific Computing. Since then there have been several conferences and workshops, co-sponsored by the IEEE Computer Society and ACM SIGGRAPH". They have been devoted to the general topics of data visualization, information visualization and scientific visualization, and more specific areas such as volume visualization.
In 1786, William Playfair published the first presentation graphics.
There is no comprehensive 'history' of data visualization. There are no accounts that span the entire development of visual thinking and the visual representation of data, and which collate the contributions of disparate disciplines. Michael Friendly and Daniel J Denis of York University are engaged in a project that attempts to provide a comprehensive history of visualization. Contrary to general belief, data visualization is not a modern development. Since prehistory, stellar data, such as the locations of stars, have been visualized on the walls of caves (such as those found in Lascaux Cave in southern France) dating from the Pleistocene era. Physical artefacts such as Mesopotamian clay tokens (5500 BC), Inca quipus (2600 BC) and Marshall Islands stick charts (n.d.) can also be considered as visualizing quantitative information.
The first documented data visualization can be traced back to 1160 B.C. with the Turin Papyrus Map, which accurately illustrates the distribution of geological resources and provides information about quarrying of those resources. Such maps can be categorized as thematic cartography, a type of data visualization that presents and communicates specific data and information through a geographical illustration designed to show a particular theme connected with a specific geographic area. The earliest documented forms of data visualization were various thematic maps from different cultures, as well as ideograms and hieroglyphs that provided and allowed interpretation of the information illustrated. For example, Linear B tablets of Mycenae provided a visualization of information regarding Late Bronze Age trade in the Mediterranean. The idea of coordinates was used by ancient Egyptian surveyors in laying out towns; earthly and heavenly positions were located by something akin to latitude and longitude at least by 200 BC; and the map projection of a spherical Earth into latitude and longitude by Claudius Ptolemy in Alexandria would serve as a reference standard until the 14th century.
The invention of paper and parchment allowed further development of visualizations throughout history. One surviving example is a graph from the 10th, or possibly 11th, century, intended as an illustration of planetary movement and used in an appendix of a textbook in monastery schools. The graph apparently was meant to represent a plot of the inclinations of the planetary orbits as a function of time. For this purpose, the zone of the zodiac was represented on a plane with a horizontal line divided into thirty parts as the time or longitudinal axis. The vertical axis designates the width of the zodiac. The horizontal scale appears to have been chosen for each planet individually, for the periods cannot be reconciled. The accompanying text refers only to the amplitudes. The curves are apparently not related in time.
By the 16th century, techniques and instruments for precise observation and measurement of physical quantities, and geographic and celestial position were well-developed (for example, a "wall quadrant" constructed by Tycho Brahe [1546–1601], covering an entire wall in his observatory). Particularly important were the development of triangulation and other methods to determine mapping locations accurately. Very early, the measure of time led scholars to develop innovative way of visualizing the data (e.g. Lorenz Codomann in 1596, Johannes Temporarius in 1596).
French philosophers and mathematicians René Descartes and Pierre de Fermat developed analytic geometry and the two-dimensional coordinate system, which heavily influenced practical methods of displaying and calculating values. Fermat and Blaise Pascal's work on statistics and probability theory laid the groundwork for what we now conceptualize as data. According to the Interaction Design Foundation, these developments allowed and helped William Playfair, who saw potential for graphical communication of quantitative data, to generate and develop graphical methods of statistics. In the second half of the 20th century, Jacques Bertin used quantitative graphs to represent information "intuitively, clearly, accurately, and efficiently".
John Tukey and Edward Tufte pushed the bounds of data visualization; Tukey with his new statistical approach of exploratory data analysis and Tufte with his book "The Visual Display of Quantitative Information" paved the way for refining data visualization techniques for more than statisticians. With the progression of technology came the progression of data visualization; starting with hand-drawn visualizations and evolving into more technical applications – including interactive designs leading to software visualization.
Programs like SAS, SOFA, R, Minitab, Cornerstone and more allow for data visualization in the field of statistics. Other, more specialized tools, such as the visualization library D3 and the programming languages Python and JavaScript, help make the visualization of quantitative data possible. Private schools have also developed programs to meet the demand for learning data visualization and associated programming libraries, including free programs like The Data Incubator and paid programs like General Assembly.
Beginning with the symposium "Data to Discovery" in 2013, ArtCenter College of Design, Caltech and JPL in Pasadena have run an annual program on interactive data visualization. The program asks: How can interactive data visualization help scientists and engineers explore their data more effectively? How can computing, design, and design thinking help maximize research results? What methodologies are most effective for leveraging knowledge from these fields? By encoding relational information with appropriate visual and interactive characteristics to help interrogate, and ultimately gain new insight into data, the program develops new interdisciplinary approaches to complex science problems, combining design thinking and the latest methods from computing, user-centered design, interaction design and 3D graphics.
Terminology
Data visualization involves specific terminology, some of which is derived from statistics. For example, author Stephen Few defines two types of data, which are used in combination to support a meaningful analysis or visualization:
Categorical: Represent groups of objects with a particular characteristic. Categorical variables can be either nominal or ordinal. Nominal variables, for example gender, have no order among their values. Ordinal variables are categories with an order, for example the age group someone falls into.
Quantitative: Represent measurements, such as the height of a person or the temperature of an environment. Quantitative variables can be either continuous or discrete. Continuous variables capture the idea that measurements can always be made more precisely, while discrete variables have only a finite number of possibilities, such as a count of some outcomes or an age measured in whole years.
The distinction between quantitative and categorical variables is important because the two types require different methods of visualization.
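As a small sketch of this distinction, the four variable types can be expressed directly as pandas dtypes. Pandas is assumed here only as a convenient stand-in for the statistical terminology; the column names and values are invented.

```python
# Few's variable types expressed as pandas dtypes.
import pandas as pd

df = pd.DataFrame({
    # Categorical, nominal: no inherent order among values.
    "gender": pd.Categorical(["f", "m", "f"]),
    # Categorical, ordinal: categories carry an explicit order.
    "age_group": pd.Categorical(["18-29", "30-44", "18-29"],
                                categories=["18-29", "30-44", "45+"],
                                ordered=True),
    # Quantitative, continuous: can always be measured more precisely.
    "height_cm": [162.5, 180.1, 171.3],
    # Quantitative, discrete: a count with a finite set of possibilities.
    "children": [0, 2, 1],
})

print(df.dtypes)
print(df["age_group"].min())  # ordering makes comparisons meaningful
```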
Two primary types of information displays are tables and graphs.
A table contains quantitative data organized into rows and columns with categorical labels. It is primarily used to look up specific values. For example, a table might have categorical column labels representing the name (a qualitative variable) and age (a quantitative variable), with each row of data representing one person (the sampled experimental unit or category subdivision).
A graph is primarily used to show relationships among data and portrays values encoded as visual objects (e.g., lines, bars, or points). Numerical values are displayed within an area delineated by one or more axes. These axes provide scales (quantitative and categorical) used to label and assign values to the visual objects. Many graphs are also referred to as charts.
Eppler and Lengler have developed the "Periodic Table of Visualization Methods", an interactive chart displaying various data visualization methods. It includes six types of data visualization methods: data, information, concept, strategy, metaphor and compound. In Visualization Analysis and Design, Tamara Munzner writes that "Computer-based visualization systems provide visual representations of datasets designed to help people carry out tasks more effectively." Munzner argues that visualization "is suitable when there is a need to augment human capabilities rather than replace people with computational decision-making methods."
Techniques
Other techniques
Cartogram
Cladogram (phylogeny)
Concept Mapping
Dendrogram (classification)
Information visualization reference model
Grand tour
Graph drawing
Heatmap
HyperbolicTree
Multidimensional scaling
Parallel coordinates
Problem solving environment
Treemapping
Interactivity
Interactive data visualization enables direct actions on a graphical plot to change elements and link between multiple plots.
Interactive data visualization has been a pursuit of statisticians since the late 1960s. Examples of the developments can be found on the American Statistical Association video lending library.
Common interactions include:
Brushing: works by using the mouse to control a paintbrush, directly changing the color or glyph of elements of a plot. The paintbrush is sometimes a pointer and sometimes works by drawing an outline of sorts around points; the outline is sometimes irregularly shaped, like a lasso. Brushing is most commonly used when multiple plots are visible and some linking mechanism exists between the plots. There are several different conceptual models for brushing and a number of common linking mechanisms. Brushing scatterplots can be a transient operation, in which points in the active plot retain their new characteristics only while they are enclosed or intersected by the brush, or a persistent operation, so that points retain their new appearance after the brush has been moved away. Transient brushing is usually chosen for linked brushing (a minimal code sketch of linked brushing follows this list).
Painting: Persistent brushing is useful when we want to group the points into clusters and then proceed to use other operations, such as the tour, to compare the groups. It is becoming common terminology to call the persistent operation painting.
Identification: also called labeling or label brushing, is another plot manipulation that can be linked. Bringing the cursor near a point or edge in a scatterplot, or a bar in a barchart, causes a label to appear that identifies the plot element. It is widely available in many interactive graphics, and is sometimes called mouseover.
Scaling: maps the data onto the window, and changes in the mapping function help us learn different things from the same plot. Scaling is commonly used to zoom in on crowded regions of a scatterplot, and it can also be used to change the aspect ratio of a plot to reveal different features of the data.
Linking: connects elements selected in one plot with elements in another plot. The simplest kind of linking is one-to-one, where both plots show different projections of the same data and a point in one plot corresponds to exactly one point in the other. When using area plots, brushing any part of an area has the same effect as brushing it all and is equivalent to selecting all cases in the corresponding category. Even when some plot elements represent more than one case, the underlying linking rule still links one case in one plot to the same case in other plots. Linking can also be by categorical variable, such as by a subject id, so that all data values corresponding to that subject are highlighted in all the visible plots.
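The sketch below shows a minimal one-to-one linked brushing interaction using matplotlib's standard RectangleSelector widget: dragging a rectangle in the left scatterplot recolors the same cases in the right one. The data, colors and variable names are invented for illustration, and dedicated systems implement far richer brushing models.

```python
# Minimal linked brushing: select points in the left plot and the same
# cases light up in the right plot (one-to-one linking).
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.widgets import RectangleSelector

rng = np.random.default_rng(3)
n = 200
x1, y1 = rng.normal(size=n), rng.normal(size=n)   # projection shown left
x2, y2 = y1 + rng.normal(scale=0.3, size=n), x1   # same cases, right plot

fig, (ax_left, ax_right) = plt.subplots(1, 2, figsize=(9, 4))
sc_left = ax_left.scatter(x1, y1, c="steelblue")
sc_right = ax_right.scatter(x2, y2, c="steelblue")

def on_select(eclick, erelease):
    # Which cases fall inside the brush rectangle in the left plot?
    x_lo, x_hi = sorted((eclick.xdata, erelease.xdata))
    y_lo, y_hi = sorted((eclick.ydata, erelease.ydata))
    inside = (x1 >= x_lo) & (x1 <= x_hi) & (y1 >= y_lo) & (y1 <= y_hi)
    colors = np.where(inside, "crimson", "steelblue")
    sc_left.set_color(colors)    # recolor brushed points...
    sc_right.set_color(colors)   # ...and the linked cases in the other plot
    fig.canvas.draw_idle()

# Keep a reference so the selector is not garbage-collected.
selector = RectangleSelector(ax_left, on_select, useblit=True,
                             interactive=True)
plt.show()
```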
Other perspectives
There are different views on the scope of data visualization. One common focus is on information presentation, as in Friedman (2008). Friendly (2008) presumes two main parts of data visualization: statistical graphics and thematic cartography. Along these lines, the "Data Visualization: Modern Approaches" (2007) article gives an overview of seven subjects of data visualization:
Articles & resources
Displaying connections
Displaying data
Displaying news
Displaying websites
Mind maps
Tools and services
All these subjects are closely related to graphic design and information representation.
On the other hand, from a computer science perspective, Frits H. Post in 2002 categorized the field into sub-fields:
Information visualization
Interaction techniques and architectures
Modelling techniques
Multiresolution methods
Visualization algorithms and techniques
Volume visualization
Within the Harvard Business Review, Scott Berinato developed a framework for approaching data visualisation. To start thinking visually, users must consider two questions: what you have, and what you're doing. The first step is identifying what data you want visualised. It may be data-driven, like profit over the past ten years, or a conceptual idea, like how a specific organisation is structured. Once this question is answered, one can then focus on whether they are trying to communicate information (declarative visualisation) or trying to figure something out (exploratory visualisation). Scott Berinato combines these questions to give four types of visual communication that each have their own goals.
These four types of visual communication are as follows:
idea illustration (conceptual & declarative).
Used to teach, explain and/or simplify concepts. For example, organisation charts and decision trees.
idea generation (conceptual & exploratory).
Used to discover, innovate and solve problems. For example, a whiteboard after a brainstorming session.
visual discovery (data-driven & exploratory).
Used to spot trends and make sense of data. This type of visual is more common with large and complex data where the dataset is somewhat unknown and the task is open-ended.
everyday data-visualisation (data-driven & declarative).
The most common and simple type of visualisation used for affirming and setting context. For example, a line graph of GDP over time.
Applications
Data and information visualization insights are being applied in areas such as:
Scientific research
Digital libraries
Data mining
Information graphics
Financial data analysis
Health care
Market studies
Manufacturing production control
Crime mapping
eGovernance and Policy Modeling
Digital Humanities
Data Art
Organization
Notable academic and industry laboratories in the field are:
Adobe Research
IBM Research
Google Research
Microsoft Research
Panopticon Software
Scientific Computing and Imaging Institute
Tableau Software
University of Maryland Human-Computer Interaction Lab
Conferences in this field, ranked by significance in data visualization research, are:
IEEE Visualization: An annual international conference on scientific visualization, information visualization, and visual analytics. Conference is held in October.
ACM SIGGRAPH: An annual international conference on computer graphics, convened by the ACM SIGGRAPH organization. Conference dates vary.
Conference on Human Factors in Computing Systems (CHI): An annual international conference on human–computer interaction, hosted by ACM SIGCHI. Conference is usually held in April or May.
Eurographics: An annual Europe-wide computer graphics conference, held by the European Association for Computer Graphics. Conference is usually held in April or May.
For further examples, see: :Category:Computer graphics organizations
Data presentation architecture
Data presentation architecture (DPA) is a skill-set that seeks to identify, locate, manipulate, format and present data in such a way as to optimally communicate meaning and proper knowledge.
Historically, the term data presentation architecture is attributed to Kelly Lautt: "Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value of Business Intelligence. Data presentation architecture weds the science of numbers, data and statistics in discovering valuable information from data and making it usable, relevant and actionable with the arts of data visualization, communications, organizational psychology and change management in order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."
Objectives
DPA has two main objectives:
To use data to provide knowledge in the most efficient manner possible (minimize noise, complexity, and unnecessary data or detail given each audience's needs and roles)
To use data to provide knowledge in the most effective manner possible (provide relevant, timely and complete data to each audience member in a clear and understandable manner that conveys important meaning, is actionable and can affect understanding, behavior and decisions)
Scope
With the above objectives in mind, the actual work of data presentation architecture consists of:
Creating effective delivery mechanisms for each audience member depending on their role, tasks, locations and access to technology
Defining important meaning (relevant knowledge) that is needed by each audience member in each context
Determining the required periodicity of data updates (the currency of the data)
Determining the right timing for data presentation (when and how often the user needs to see the data)
Finding the right data (subject area, historical reach, breadth, level of detail, etc.)
Utilizing appropriate analysis, grouping, visualization, and other presentation formats
Related fields
DPA work shares commonalities with several other fields, including:
Business analysis in determining business goals, collecting requirements, mapping processes.
Business process improvement in that its goal is to improve and streamline actions and decisions in furtherance of business goals
Data visualization in that it uses well-established theories of visualization to add or highlight meaning or importance in data presentation.
Digital humanities explores more nuanced ways of visualising complex data.
Information architecture, but information architecture's focus is on unstructured data and therefore excludes both analysis (in the statistical/data sense) and direct transformation of the actual content (data, for DPA) into new entities and combinations.
HCI and interaction design, since many of the principles in how to design interactive data visualisation have been developed cross-disciplinary with HCI.
Visual journalism and data-driven journalism or data journalism: Visual journalism is concerned with all types of graphic facilitation of the telling of news stories, and data-driven and data journalism are not necessarily told with data visualisation. Nevertheless, the field of journalism is at the forefront in developing new data visualisations to communicate data.
Graphic design, conveying information through styling, typography, position, and other aesthetic concerns.
See also
Analytics
Big data
Climate change art
Color coding in data visualization
Computational visualistics
Information art
Data management
Data physicalization
Data Presentation Architecture
Data profiling
Data warehouse
Geovisualization
Grand Tour (data visualisation)
imc FAMOS (1987), graphical data analysis
Infographics
Information design
Information management
List of graphical methods
List of information graphics software
List of countries by economic complexity, example of Treemapping
Patent visualisation
Software visualization
Statistical analysis
Visual analytics
Warming stripes
Further reading
Kawa Nazemi (2014). Adaptive Semantics Visualization. Eurographics Association.
Andreas Kerren, John T. Stasko, Jean-Daniel Fekete, and Chris North (2008). Information Visualization – Human-Centered Issues and Perspectives. Volume 4950 of LNCS State-of-the-Art Survey, Springer.
Spence, Robert (2007). Information Visualization: Design for Interaction (2nd Edition). Prentice Hall.
Jeffrey Heer, Stuart K. Card, James Landay (2005). "Prefuse: a toolkit for interactive information visualization". In: ACM Human Factors in Computing Systems CHI 2005.
Ben Bederson and Ben Shneiderman (2003). The Craft of Information Visualization: Readings and Reflections. Morgan Kaufmann.
Colin Ware (2000). Information Visualization: Perception for design. Morgan Kaufmann.
Stuart K. Card, Jock D. Mackinlay and Ben Shneiderman (1999). Readings in Information Visualization: Using Vision to Think, Morgan Kaufmann Publishers.
Schwabish, Jonathan A. 2014. "An Economist's Guide to Visualizing Data." Journal of Economic Perspectives, 28 (1): 209–34.
External links
Milestones in the History of Thematic Cartography, Statistical Graphics, and Data Visualization, An illustrated chronology of innovations by Michael Friendly and Daniel J. Denis.
Duke University, Christa Kelleher presentation: Communicating through infographics, visualizing scientific & engineering information, March 6, 2015
Thematic learning
Thematic teaching (also known as thematic instruction) is the selecting and highlighting of a theme through an instructional unit or module, course, or multiple courses. It is often interdisciplinary, highlighting the relationship of knowledge across academic disciplines and everyday life. Themes can be topics or take the form of overarching questions. Thematic learning is closely related to interdisciplinary or integrated instruction and to topic-, project- or phenomenon-based learning. Thematic teaching is commonly associated with elementary classrooms and middle schools using a team-based approach, but this pedagogy is equally relevant in secondary schools and with adult learners. A common application is that of second or foreign language teaching, where the approach is more commonly known as theme-based instruction. Thematic instruction assumes students learn best when they can associate new information holistically across the entire curriculum and with their own lives, experiences, and communities.
Steps
Under the thematic learning instruction, organization of curriculum can be based on a macro or micro theme, depending upon the topic to be covered.
Choosing a theme: Themes about the particular topic should be of interest to students and relevant to the curriculum. In some approaches, students choose the thematic topic. Themes should also be topics of interest to the teacher(s), because successful thematic instruction often requires additional research and preparation. Interdisciplinary themes related to multiple academic disciplines such as science, social studies, math, language/writing, and other courses or subjects can be reinforced in lessons throughout the school day.
Themes relevant to students' interests encourage active participation. For example, students may express interest in current popular music. This interest can be developed into thematic instructional units and lessons that span time and cultures, exploring how cultures interact and influence one another, or treating music as social or political commentary in social studies or history classes.
Themes that allow past-to-present connections and highlight persistent issues faced by society, such as war, poverty, pollution, disease, or natural disasters, are especially effective.
Doing the research: Effective interdisciplinary thematic instruction requires extensive knowledge and research by the teacher. Without a broad knowledge base on which to design relevant activities and lessons, thematic lessons can become randomly selected activities loosely related to a topic that fail to demand higher level thinking from students.
Design essential question(s) relevant to the theme. Essential questions are open-ended, intellectually engaging questions that demand higher-order thinking. Essential questions focus a thematic inquiry, helping the teacher choose the most important facts and concepts relative to the theme and focus planning efforts. Essential questions require students to learn the key facts and concepts related to the theme as well as analyze and evaluate the importance and relevance of that information. Good essential questions cannot be answered with a simple yes/no or true/false; students must discuss, defend, and debate issues related to the theme. Designing thematic instruction around essential questions requires that students both learn content and develop critical analysis skills.
Designing instructional units and activities that guide students in answering the essential question. Teachers must choose teaching and learning strategies, activities, classroom materials, and experiences related to the wider theme that guide students in answering the essential question. Strategies can be individual or cooperative, and can stress various skills such as reading, writing, or presenting.
Curriculum
For thematic learning to be successful among learners, the following should be considered:
Thematic learning consists of a curriculum that is unified and dwells on an identified theme or topic, ideally guided by essential questions.
The sources are not limited to textbooks. For example, in the social studies or history classroom, primary source texts and images encourage the development of critical reading skills. For themes related to current events, analysis of modern media hones media literacy skills.
Various teaching and learning methods can be used. Projects, cooperative learning, active participation, experiential learning are often highlighted.
Thinking and problem solving skills, observation, critical reasoning, analysis and drawing conclusions are key skills in thematic learning.
Advantages
Students learn better when experiencing knowledge in a larger context. They begin to see relationships and connections across time, place, and disciplines.
Learning about wider themes and related concepts and facts more closely resembles how life is experienced outside of school and the classroom.
Themes can be chosen that are current and student-centered, incorporating the needs, interests and perspectives of the students.
Carefully selecting topics and information related to a theme helps teachers narrow the overwhelming amount of information of any discipline.
Thematic instruction aligns with current popular pedagogies and standards, including place-based education, project-based education, and cooperative learning.
When thematic instruction takes place along with cooperative learning, the advantages include the following:
Thematic cooperative learning activities encourage authentic communication.
Learners share their ideas with others in the group.
Interaction encourages the values of respect and cooperation, thus building effective peer learning groups.
The teacher becomes a facilitator, reducing their role as a dispenser of learning.
See also
Interdisciplinary teaching
References
Education theory
Pedagogy
Learning methods
Harkness table
The Harkness table, Harkness method, or Harkness discussion is a teaching and learning method involving students seated in a large, oval configuration to discuss ideas in an encouraging, open-minded environment with only occasional or minimal teacher intervention.
Overview
The Harkness method is in use at many American boarding schools and colleges and encourages discussion in classes. The style is related to the Socratic method. Developed at Phillips Exeter Academy, the method's name comes from the oil magnate and philanthropist Edward Harkness, who presented the school with a monetary gift in 1930. It has been adopted in numerous schools, such as The Dunham School, St. Mark's School of Texas, Milton Academy, The College Preparatory School, The Masters School, and Seoul Foreign School where small class-size makes it effective. However, Harkness remains impractical for schools with larger class sizes. Harkness described its use as follows:
What I have in mind is [a classroom] where [students] could sit around a table with a teacher who would talk with them and instruct them by a sort of tutorial or conference method, where [each student] would feel encouraged to speak up. This would be a real revolution in methods.
Harkness practices can vary, most notably between humanities subjects such as English and history and technical subjects such as math and physics.
References
External links
'Edward S. Harkness, 1874-1940', Richard F. Niebling, Phillips Exeter Academy (PDF)
Teaching in the United States
Phillips Exeter Academy
Postmodernism, or, the Cultural Logic of Late Capitalism
Postmodernism, or, the Cultural Logic of Late Capitalism is a 1991 book by Fredric Jameson, in which the author offers a critique of modernism and postmodernism from a Marxist perspective. The book began as a 1984 article in the New Left Review. It has been presented as his "most wide-ranging and accessible book".
Overview
Jameson defines postmodernism as the cultural system of a global, financialized stage of capitalist society. Jameson argues that postmodernism is characterized by a "crisis of historicity", a "waning of affect", and a prevalence of pastiche. He traces these characteristics of postmodernism across a variety of fields and media, including film, television, literature, economics, architecture, and philosophy. In one of his most prominent examples, he draws out the differences between modernism and postmodernism by comparing Van Gogh's "Peasant Shoes" with Andy Warhol's "Diamond Dust Shoes". For Jameson, postmodernism, as a form of mass-culture driven by capitalism, pervades every aspect of our daily lives.
Background and analysis of postmodernism
In 1984, during his tenure as Professor of Literature and History of Consciousness at the University of California, Santa Cruz, Jameson published an article titled "Postmodernism, or, the Cultural Logic of Late Capitalism" in the journal New Left Review. This controversial article, which Jameson later expanded into a book, was part of a series of analyses of postmodernism from the dialectical perspective Jameson had developed in his earlier work on narrative. Jameson viewed the postmodern "skepticism towards metanarratives" as a "mode of experience" stemming from the conditions of intellectual labor imposed by the late capitalist mode of production.
Postmodernists claimed that the complex differentiation between "spheres" or fields of life (such as the political, the social, the cultural, the commercial), and between distinct social classes and roles within each field, had been overcome by the crisis of foundationalism and the consequent relativization of truth-claims. For example, in The Postmodern Condition: A Report on Knowledge (1979), which helped establish the term "postmodernism", Jean-François Lyotard described a shaken or failed public trust in the promise of enlightenments, faiths, or governments, with their metanarratives of epistemic or historical progress, leaving individuals to their own experiences. This was sometimes criticized as a metanarrative about the end of metanarratives and therefore considered ironic or paradoxical.
Jameson argued against postmodernists, asserting that these phenomena had or could have been understood successfully within a modernist framework; the postmodern failure to achieve this understanding implied an abrupt break in the dialectical refinement of thought. In his view, postmodernity's merging of all discourse into an undifferentiated whole was the result of the colonization of the cultural sphere, which had retained at least partial autonomy during the prior modernist era, by a newly organized corporate capitalism. Following Adorno and Horkheimer's analysis of the culture industry, Jameson discussed this phenomenon in his critical discussion of architecture, film, narrative, and visual arts, as well as in his strictly philosophical work.
Two of Jameson's best-known claims from Postmodernism are that post-modernity is characterized by "pastiche" and a "crisis in historicity". Jameson argues that parody (which implies a moral judgment or a comparison with societal norms) was replaced by pastiche (collage and other forms of juxtaposition without a normative grounding). Jameson recognizes that modernism frequently "quotes" from different cultures and historical periods, but he argues that postmodern cultural texts indiscriminately cannibalize these elements, erasing any sense of critical or historical distance and resulting in pure pastiche. Relatedly, Jameson argues that the postmodern era suffers from a crisis in historicity: "there no longer does seem to be any organic relationship between the American history we learn from schoolbooks and the lived experience of the current, multinational, high-rise, stagflated city of the newspapers and of our own everyday life".
Jameson's analysis of postmodernism attempts to view it as historically grounded; he therefore explicitly rejects any moralistic opposition to postmodernity as a cultural phenomenon, and continues to insist upon a Hegelian immanent critique that would "think the cultural evolution of late capitalism dialectically, as catastrophe and progress all together". His refusal to simply dismiss postmodernism from the outset, however, was misinterpreted by some Marxist intellectuals as an implicit endorsement of postmodern views.
Table of contents
Contents:
The Cultural Logic of Late Capitalism: pp. 1–54.
Theories of the Postmodern: 55–66.
Surrealism Without the Unconscious: 67–96.
Spatial Equivalents in the World System: 97–129.
Reading and the Division of Labor: 131–153.
Utopianism After the End of Utopia: 154–180.
Immanence and Nominalism in Postmodern Theoretical Discourse: 181–259.
Postmodernism and the Market: 260–278.
Nostalgia for the Present: 279–296.
Secondary Elaborations: 297–418.
See also
Late capitalism
Notes
References
External links
Postmodernism, or, the Cultural Logic of Late Capitalism, parts of chapter one
1991 non-fiction books
Books about globalization
Books by Fredric Jameson
20th century in philosophy
Continental philosophy literature
Duke University Press books
English-language books
Marxist books
Political philosophy literature
Works about postmodernism
Netnography
Netnography is a "form of qualitative research that seeks to understand the cultural experiences that encompass and are reflected within the traces, practices, networks and systems of social media". It is a specific set of research practices related to data collection, analysis, research ethics, and representation, rooted in participant observation, that can be conceptualized into three key stages: investigation, interaction, and immersion. In netnography, a significant amount of the data originates in and manifests through the digital traces of naturally occurring public conversations recorded by contemporary communications networks. Netnography uses these conversations as data. It is an interpretive research method that adapts the traditional, in-person participant observation techniques of anthropology to the study of interactions and experiences manifesting through digital communications.
The term netnography is a portmanteau combining "Internet" or "network" with "ethnography". Netnography was originally developed in 1995 by marketing professor Robert Kozinets as a tool to analyze online fan discussions about the Star Trek franchise. The use of the method spread from marketing research and consumer research to a range of other disciplines, including education, library and information sciences, hospitality, tourism, computer science, psychology, sociology, anthropology, geography, urban studies, leisure and game studies, and human sexuality and addiction research.
Netnography and ethnography
Though netnography is developed from ethnography and applied in online settings, it is more than the application of traditional ethnographic techniques in an online context. Several characteristics differentiate netnography from ethnography.
Research focus. Netnographic research is more focused on reflections and data provided by online communities, whereas ethnography can focus on the entire human society.
Communication focus. Ethnography comprises research into all forms of human communication, including body language and tone of voice. Netnography incorporates online human communication, which is primarily textual, along with multimedia communication such as video, audio, and pictures.
Research method. Netnography offers a less intrusive research experience than ethnography, because netnography relies mainly on observational data. Netnography is more naturalistic than personal interviews, focus groups, surveys, and experiments, whose results are largely shaped by the researcher. In addition, participants may alter their reactions or answers when taking part in interviews, focus groups, and surveys. The main advantage of netnography is that individuals naturally reveal information online, including sensitive details, unasked and voluntarily, and the netnographer can gather this organic information through observation.
Data collection. Compared with traditional ethnography, which requires researchers to immerse themselves physically among the people they study in order to collect data, netnographic researchers are able to download communication data directly from an online community. Netnographic researchers do not become members of communities and cultures as in traditional ethnographic practice, but instead engage in varied and flexible levels of committed, public online social interaction, and in this way immerse themselves in the community. Thus ethnography usually collects real-life observations and primary data, while netnography usually collects computer-based and secondary data.
Efficiency. Netnography tends to be less costly and timelier than many other methods because it leverages online archives and existing technologies to rapidly and efficiently gather and sort relevant data. Netnographic research is faster and cheaper in comparison with ethnographic research.
Number of participants. Netnography enables the researcher to investigate a large number of people, even more than when using ethnography.
Retroactivity. Netnography can trace conversations back several years, allowing researchers to understand the history or development of a topic or community, whereas ethnography can only study the current situation.
Netnography is also similar to ethnography in these ways:
It is naturalistic: it seeks to study online social interaction by participating within and observing it;
It is immersive: it involves the researcher as the key element in data collection and creation;
It is descriptive: it seeks rich contextual portrayals of the lived experience of online social life;
It is multi-method: it can involve a range of other methods, such as interviews, semiotic visual analysis, and data science; and
It is adaptable: it can be used to study many types of online sites and technology-related communications and interaction.
Key components
Key components of netnography include the emotion or story, the researcher, the key source person, and cultural fluency.
Emotion and story
Netnography combines rich samples of communicative interactions flowing through the internet: textual, graphic, audio, photographic, and audio-visual. The data are then analysed using content analysis, semiotic visual analysis, interviews (online and in person), social network analysis, and big data analytic tools and techniques. These techniques are employed to find the emotional story behind a subject.
This is what differentiates netnography from big-data analysis, which often relies on machine processing (sentiment analysis, word clouds), and from digital ethnography or digital anthropology. These terms (netnography, digital ethnography, and digital anthropology) are often used interchangeably, but they are very different.
The difference between netnography and digital ethnography can be seen in several ways, but the most obvious is the research motivation and the methodology it determines. Netnography focuses on internet users forming an online community, which is set apart from everyday offline life, while digital ethnography treats the digital world only as a place to extend offline data collection and complement ethnographic research. Their methodological frameworks are not fundamentally different: netnography mainly uses online qualitative techniques, occasionally supplemented by online quantitative research, while digital ethnography combines quantitative (e.g., network and co-word analysis) and qualitative (e.g., sentiment and content analysis) techniques.
To find the emotional story, big data analysis is often used as a complementary technique, usually at the beginning of the research. However, instead of scooping up a huge amount of data and relying on machines to analyse it, the strength of netnography lies in contextualized data, human-centered analysis, and resonant representation.
Researcher
The researcher is not simply a person who knows how to run specific software, but a living, breathing individual whose personality will enrich the research. In netnography, to find the necessary emotion and the story behind the individuals, the researcher has to have a deep understanding of the culture that surrounds the data they use. They have to immerse themselves in the community from which they source their data. A human being is very complex, and the language we use, regardless of the language itself, has depth: nuance, symbolism, and sarcasm, to name a few, not to mention context. What is acceptable or positive in one culture might be the total opposite in others. Unearthing the layers is a complicated and delicate process that no algorithm can currently perform.
For example, if a researcher wished to understand the sentiment of a brand's customers or potential consumers towards a specific brand, the easiest thing to do is perhaps analyze the comments section of the brand's website. However, should there be a substantial number of comments that are using sarcastic language, solely using a machine-generated algorithm will give the wrong conclusion.
Key source person
The key to understanding the culture is to find rich data from a key source person, the third component of netnography. Using the same example, to find the reason behind the perception of a brand or the reason behind brand loyalty, a netnographer needs to comb through the comments section to find the gold mine.
One example of a gold mine is a genuine comment written by a person with very strong emotions towards the brand, either positive or negative, perhaps someone who loves or hates the brand with every fiber of their being. The netnographer should find this data and analyze it. This small but in-depth data could be the answer to the research question.
Cultural fluency
The goal of a netnographer is cultural fluency. Cultural fluency means that at the end of the research, the researcher should be fluent in the symbolic language of the site and even so knowledgeable about the users that they have an almost biographical authority regarding them.
Cultural meaning(s) embedded in the Internet
Unlike the fetishization of big data and its attempt to portray a generic characterization of markets in online communities (i.e., frequency of brand engagement), netnography enables researchers "to argue for a central tenet" (Kozinets, 2016, p. 2) that emerges from the collected data and represents a particular market. Netnography has an advantage over ethnography in that it focuses primarily on the context of textual communication and any affiliated multimedia elements, whereas ethnography focuses primarily on physical forms of human communication (e.g., body language) (Bartl et al., p. 168). Since netnography uses spontaneous data and conducts observation without intruding on online users, it is regarded as more naturalistic than other approaches such as interviews, focus groups, surveys and experiments (Kozinets, 2015). While individual online communications are briefer than in-person exchanges, online communication can be collected much faster and far less expensively than data gathered through traditional in-person ethnography and other qualitative methodologies like focus groups or interviews. It is also a challenging approach, involving work to tackle unpredictable and abundant data (Kozinets, 2015).
The need to understand the cultural meaning of online communities (e.g., Reddit; LinkedIn) has grown exponentially since the rise of Web 2.0 interfaces (i.e., user-generated content), along with other technological advances. One can no longer assume that people are isolating themselves from the physical world with technology; rather, technology such as computer-mediated communication and digital information serves as a gateway that allows them to interact with familiar and, at times, anonymous users on a given occasion. Furthermore, cultural practices within the physical world are extended to, and enhanced by, these online communities, where people can choose a dating partner, learn about a religion and make brand choices, just to name a few examples. With ethnography's influence on netnography, this research method enables the researcher to link communication patterns in order to understand the tacit and latent practices involved within and between these online communities of interest (Mariampolski, 2005). As Kozinets pointed out, "these social groups have a 'real' existence for their participants, and thus have consequential effects on many aspects of behaviour, including consumer behavior" (see also Muniz and O'Guinn, 2001).
People participating in these online communities often share in-depth insights on themselves, their lifestyles, and the reasons behind the choices they make as consumers (brands, products etc.). Such insights have the potential of becoming something actionable. More specifically, this means that the researcher will be able to present an unknown and unseen truth to his/her client (Cayla & Arnold, 2013) so that they are able to make better decisions in engaging with a target community, whether it be in the form of an advertising or a non-profit campaign. While netnography has been predominantly applied within the field of marketing (Bengry-Howell, 2011), its methods can help researchers and their clients within the social sciences to create an empathetic understanding of people's cultural behavior online, and to allow the researcher and clients to 'immerse themselves' in the consumer domain (Kozinets, 2002; Piller et al., 2011; in Bartl et al., 2016, p. 167). The following steps provide a systematic process to search for, collect, and analyze data (Bartl et al., 2016, p. 168; see also Kozinets, 2000, 2010); a brief illustrative sketch follows the list.
Define the research field. Develop a detailed research question(s) that allows the researcher to qualitatively find patterns.
Communication identification and selection. Use online search engines in order to identify appropriate, research-related online communities, which the researcher will then need to analyze and select details about the community, its members, and its forum.
Community observation and data collection. Observe the selected online communities in a non-participatory, non-biased manner. The researcher will then need to retrieve data from people's communication and data from personal observation.
Data analysis. Analyze data with automated software and manual methods in order to uncover patterns from the data analyses.
Research ethics. With regards to ethics, be vigilant in ensuring the online community members' anonymity and confidentiality.
Findings and solutions. Apply an empathetic perspective to obtain a deep understanding of the people of interest, so that the solutions are well translated and trustworthy.
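To make the process above concrete, here is a minimal, hypothetical Python sketch of steps 2–5: screening posts from a selected community for relevance to the research question, anonymizing their authors for ethical representation, and preparing them for manual interpretive coding. All data, field names, and keywords are invented for illustration, and the sketch is not a substitute for the human reading and interpretation netnography requires.

```python
import hashlib

# Hypothetical posts downloaded from a selected online community (step 3).
posts = [
    {"author": "coffee_lover_99",
     "text": "I switched brands because the old one changed its roast."},
    {"author": "barista_dan",
     "text": "The new packaging feels wasteful, honestly."},
    {"author": "jmk",
     "text": "Anyone watch the game last night?"},
]

def anonymize(author: str) -> str:
    """Replace a username with a stable pseudonym (step 5: research ethics)."""
    return "member_" + hashlib.sha256(author.encode()).hexdigest()[:8]

# Keywords derived from the research question (step 1) drive a crude
# relevance screen; a netnographer would still read each post in context.
RESEARCH_KEYWORDS = {"brand", "brands", "packaging", "roast"}

def is_relevant(text: str) -> bool:
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & RESEARCH_KEYWORDS)

# Step 4: retain relevant, anonymized posts for manual interpretive coding.
corpus = [
    {"pseudonym": anonymize(p["author"]), "text": p["text"]}
    for p in posts
    if is_relevant(p["text"])
]

for entry in corpus:
    print(entry["pseudonym"], "->", entry["text"])
```

In practice the automated screen only narrows the field; the retained posts would then be read, coded, and interpreted by the researcher, with ethical handling of quotations as discussed below.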
Netnography offers a range of new insights for front end innovation, providing:
Holistic marketplace descriptions
Communicative and cultural comprehension
Embedded understanding of consumer choice
Naturalistic views of brand meaning
Discovery of consumer innovation
Mappings of sociocultural online space
Data collection
Netnography collects three types of data: Internet data, interview data, and fieldnotes.
Internet data: Researchers should spend the time to match their research questions and interests to appropriate online forum, using the novel resources of online search engines such as Yahoo! and Google groups, before initiating entrée. Before initiating contact as a participant, or beginning formal data collection, the distinctive characteristics of the online communities should be familiar to the netnographer.
Interview data: The interview can be conducted via email, Skype, in person, or by using other methods. Netnography's emphasis on Internet data does not remove the need to establish those data in context and to extend understanding of them into related concepts, archives, communications, and sites.
Fieldnotes: Reflective fieldnotes, in which ethnographers record their observations, are a time-tested and recommended method in netnography. Although some netnographies have been conducted using only observation and download, without the researcher writing a single fieldnote, this non-participant approach draws into question the ethnographic orientation of the investigation.
As with grounded theory, data collection should continue as long as new insights are being generated. For purposes of precision, some netnographers closely track the amount of text collected and read, and the number of distinct participants. CAQDAS software solutions can expedite coding, content analysis, data linking, data display, and theory-building functions. New forms of qualitative data analysis are constantly being developed by a variety of firms (such as MotiveQuest and Nielsen BuzzMetrics), although the results from these firms read more like content analyses than ethnographic representations. Netnography and content analysis differ in their adoption of computational methods for collecting semi-automated data, analyzing data, recognizing words, and visualizing data (Kozinets, 2016). However, some scholars dispute netnography's distance from content analysis, preferring to assert that it is also a content analytic technique.
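As a hypothetical illustration of this kind of precision tracking, the sketch below counts collected text and distinct participants and applies a naive saturation check: collection continues only while new batches of human-coded data still yield codes not already in the codebook. The data, field names, and codes are invented for the example and do not represent any particular CAQDAS tool.

```python
# Hypothetical batches of posts that a human coder has already labeled with
# interpretive codes; all names and codes are invented for illustration.
batches = [
    [{"pseudonym": "member_a1", "text": "I love the new roast",
      "codes": ["brand loyalty"]}],
    [{"pseudonym": "member_b2", "text": "The packaging is wasteful",
      "codes": ["sustainability concern"]}],
    [{"pseudonym": "member_c3", "text": "Still love the roast",
      "codes": ["brand loyalty"]}],
]

def corpus_stats(corpus):
    """Precision tracking: total words collected and distinct participants."""
    words = sum(len(entry["text"].split()) for entry in corpus)
    participants = len({entry["pseudonym"] for entry in corpus})
    return words, participants

codebook = set()
for i, batch in enumerate(batches, start=1):
    # A batch generates new insight only if it introduces unseen codes.
    fresh = {c for entry in batch for c in entry["codes"]} - codebook
    codebook |= fresh
    print(f"batch {i}: {len(fresh)} new code(s)")
    if not fresh:
        print("no new codes in this batch; saturation may be approaching")

all_posts = [entry for batch in batches for entry in batch]
words, participants = corpus_stats(all_posts)
print(f"collected {words} words from {participants} distinct participants")
```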
Data analysis
Distinct from data mining and content analysis, netnography as a method emphasizes the cultural contextualizing of online data. This often proves to be challenging in the social-cues-impoverished online context. Because netnography is based primarily upon the observation of textual discourse, ensuring trustworthy interpretations requires a different approach than the balancing of discourse and observed behavior that occurs during in-person ethnography. Although the online landscape mediates social representation and renders problematic the issue of informant identity, netnography seems perfectly amenable to treating behavior or the social act as the ultimate unit of analysis, rather than the individual person.
Research ethics
Research ethics may be one of the most important differences between traditional ethnography and netnography. Ethical concerns over netnography turn on early concerns about whether online forums are to be considered a private or a public site, and about what constitutes informed consent in cyberspace. In a major departure from traditional methods, netnography uses cultural information that is not given specifically, and in confidence, to the researcher. The consumers who originally created the data do not necessarily intend or welcome its use in research representations. Netnography therefore offers specific guidelines regarding when to cite online posters and authors, how to cite them, what to consider in an ethical netnographic representation, when to ask permission, and when permission is not necessary.
Advantages and limitations
Compared to surveys, experiments, focus groups, and personal interviews, netnography can be less obtrusive. It is conducted using observations in a context that is not fabricated by the researcher. Netnography also is less costly and timelier than focus groups and personal interviews.
The limitations of netnography stem from the need for researcher interpretive skill, and from the lack of informant identifiers in the online context, which can lead to difficulty generalizing results to groups outside the sample. However, these limitations can be ameliorated somewhat by careful use of convergent data collection methods that bridge offline and online research in a systematic manner, as well as by careful sampling and interpretive approaches. Researchers wishing to generalize the findings of a netnography of a particular online group to other groups must apply careful evaluations of similarity and consider using multiple methods for research triangulation. Netnography is still a relatively new method, and awaits further development and refinement at the hands of a new generation of Internet-savvy ethnographic researchers. However, several researchers are developing the techniques in social networking sites, virtual worlds, mobile communities, and other novel computer-mediated social domains.
Sample netnographic analysis
Below are listed five different types of online community from a netnographic analysis by Kozinets (see Kozinets ref. below for more detail). Even though the technologies, and the use of these technologies within culture, are evolving over time, the insights below have been included here to show an example of what a market-oriented "netnography" looked like:
bulletin boards, which function as electronic bulletin boards (also called newsgroups or usenet groups). These are often organized around particular products, services or lifestyles, each of which may have important uses and implications for marketing researchers interested in particular consumer topics (e.g., McDonald's, Sony PlayStation, beer, travel to Europe, skiing). Many consumer-oriented newsgroups have over 100,000 readers, and some have over one million.
Independent web pages as well as web-rings composed of thematically-linked World Wide Web pages. Web pages such as Epinions (www.epinions.com) provide online community resources for consumer-to-consumer exchanges. Yahoo!'s consumer advocacy listings also provide a useful listing of independent consumer web pages. Yahoo! also has an excellent directory of web-rings (dir.webring.yahoo.com).
lists (also called listservs, after the software program), which are e-mail mailing lists united by common themes (e.g., art, diet, music, professions, toys, educational services, hobbies). Some good search engines for lists are www.egroups.com and www.liszt.com.
multi-user dungeons and chat rooms tend to be considerably less market-oriented in their focus, containing information that is often fantasy-oriented, social, sexual and relational in nature. General search engines (e.g., Yahoo! or Excite) provide good directories of these communities. Dungeons and chat rooms may still be of interest to marketing researchers because of their ability to provide insight into particular themes (e.g., certain industry, demographic or lifestyle segments). However, many marketing researchers will find the generally more focused and more information-laden content provided by the members of boards, rings and lists to be more useful to their investigation than the more social information present in dungeons and chat rooms.
social media platforms. Unprecedented changes in the current communications ecology demand attention to social media analytics as a way to gain access to data and facilitate useful insights for organizations in building customer service, loyalty, advocacy, and real-time participation. Social monitoring software like Radian6, Hootsuite, and Google Analytics can help provide data that a netnographer then curates and analyzes, going beyond pie graphs and word clouds to find the deeper meaning, in order to direct a company, brand, or advocacy group to marketable opportunities and trends. Netnographers can use this type of social media listening to draw actionable insights for a current customer or consumer base; a brief sketch of this contrast follows.
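The sketch below illustrates that contrast with invented data: the raw word frequencies a monitoring dashboard typically reports (the input to a word cloud) versus a netnographer's curation of emotionally rich mentions for close human reading. The mentions and the emotion-word lexicon are fabricated for the example.

```python
from collections import Counter

# Invented brand mentions, standing in for a social-listening export.
mentions = [
    "I absolutely adore this brand, it got me through a hard year",
    "brand brand brand giveaway click here",
    "meh, the brand is fine I guess",
]

def tokens(text):
    return [w.strip(".,!?").lower() for w in text.split()]

# What a dashboard typically reports: raw word frequencies (word-cloud input).
word_counts = Counter(w for m in mentions for w in tokens(m))
print(word_counts.most_common(3))

# What a netnographer curates instead: emotionally rich mentions flagged by a
# crude, invented lexicon and then read in full context by the researcher.
EMOTION_WORDS = {"adore", "love", "hate", "hard"}
for m in mentions:
    if EMOTION_WORDS & set(tokens(m)):
        print("candidate for close reading:", m)
```

Here the frequency count is dominated by a spam post repeating the brand name, while the flagged mention carrying genuine emotion is exactly the kind of "gold mine" comment described earlier.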
Phases in conducting netnography
As research practice, netnography has 12 roughly temporal, nonexclusive and often interacting process levels (Kozinets, 2015):
Introspection phase: The researcher must reflect upon the role of the research in her current life project and life themes, and her actual life story as it unfolds.
Investigation phase: The researcher devises and sharpens the netnographic research question, basing it upon the study of sites, topics or people, and posing it appropriately, such that it can reasonably be answered by a netnographic approach.
Informational phase: The researcher should raise ethical considerations early and be aware of acceptable research ethics practices.
Interview phase: A good range of people or sites is found to investigate; these are then interviewed and matched to various online forms of sociality and satisfaction.
Inspection phase: The researcher chooses the particular site or sites to investigate. Different sorts of site, topic, person and even group combination schemes are possible and useful.
Interaction phase: The extent of the researcher's participation in online social interactions is plotted out. Creating an interaction research website that is open, generous and ethical is strongly recommended.
Immersion phase: Depth of understanding grows organically in a natural unfolding of what feels like 'human' time through the immersion in the data, topic or site on a frequent basis.
Indexing phase: An adequate amount of data is collected from a relevant variety of sources. The researcher should focus on small data, carefully selecting a smaller amount of very high quality data that is used to reveal and highlight meaningful aspects of the particular.
Interpretation phase: Interpretive analysis, or "interpenetration" is conducted as a striving for depth of understanding. Humanistic, phenomenological, existential and hermeneutic methods are favored and a variety of language theories are usefully applied.
Iteration phase: The researcher is interpreting continuously and seeking insights, general rules, patterns, research question saturation. She goes back to the field site, the data and the literature in a spiralling-in cycle looking for contributions, answers, representations, ideas and questions.
Instantiation phase: A netnography is instantiated in space and on time in a specific manner. It can take the form of one of the four ideal types (symbolic, digital, auto or humanistic) to guide the instantiated representation.
Integration phase: The result of the netnography is detected or measured. The final phase is part of its ongoing life in the world. It deals with the integration of findings and discussions with recommended action in the wider world.
Four types of netnography
According to Kozinets, any netnography will fall into one of four categories: auto, symbolic, digital or humanist. These types of netnography are defined by distinctive axiologies and foci. In order to visualize how a netnography is defined, one should imagine a simple 2×2 figure. Along the figure's x-axis, a netnography can be defined by whether it supports or challenges the status quo of business and management. In this way we determine a netnography's axiological representation orientation as either "critical", meant to disrupt, or "complementary", meant to assist in decision making. Along the y-axis of the imaginary figure, a netnography can also be categorized by its analytic field focus, or what it examines based on its orientation. A netnography can be deemed "global" if its focus is on a larger and more general system, or "local" if it narrows its scope to particular iterations of that more general system.
Through the combination of these distinct parameters we end up with the four types of netnography:
Auto-netnography: The critical and local form of netnography, because the researcher must render the data through their own identity. It can be thought of as an adaptation of auto-ethnography, as it also contains personal and auto-biographical elements. However, an auto-netnography must also possess a distinctly critical element in its understanding of the netnographer's own position in a time suffused with technologically mediated communication.
Symbolic netnography: The most commonly used version of netnography, it is both local and complementary. Utilizes social media information and interaction to render identities around individuals or websites in order to inform business decision making. It tends to focus on a particular group or field site and illustrate the group's practices, meanings and generate a more action based understanding of particular consumers.
Digital netnography: Sits on the intersection of complementary axiology and global focus. Connects statistical data analysis with cultural understandings, meaning it encompasses a large amount of social data, but always with drive toward deeper cultural understanding, rather than just statistical trends. Along with symbolic netnography, digital netnography looks to reinforce existing business, management and social practices.
Humanist netnography: Focused on research questions with deep social import. It utilizes social media data to attempt to answer these questions and influence social change. It places the researcher firmly in the position of an advocate, and can even push them into activism.
Netnography application
The main application of netnographic market research is as a tool to explore consumer behaviour by understanding customers and listening to their voice.
Netnography aids the identification of lead users and the prediction of trends.
Netnography serves as an effective driver of innovation and new product development. Example: Nivea White and Black Deodorant
Netnography can also be used to understand infrastructures, networks, groups, and any relevant constituent’s online behaviors, and potentially inform us about many elements of their overall lifeworld. Example: Online conversions to Islam
Notes
References
Bartl, Michael; Kannan, Vijai K.; Stockinger, Hanna (2016). A review and analysis of literature on netnography research. International Journal of Technology Marketing. Vol. 11, No. 2, 2016. pp. 165–196.
Further reading
(First print appearance of netnography method)
External links
A Brief Introduction to Netnography (slides)
Ethnography
Qualitative research
Paradigm shift
A paradigm shift is a fundamental change in the basic concepts and experimental practices of a scientific discipline. It is a concept in the philosophy of science that was introduced and brought into the common lexicon by the American physicist and philosopher Thomas Kuhn. Even though Kuhn restricted the use of the term to the natural sciences, the concept of a paradigm shift has also been used in numerous non-scientific contexts to describe a profound change in a fundamental model or perception of events.
Kuhn presented his notion of a paradigm shift in his influential book The Structure of Scientific Revolutions (1962).
Kuhn contrasts paradigm shifts, which characterize a Scientific Revolution, to the activity of normal science, which he describes as scientific work done within a prevailing framework or paradigm. Paradigm shifts arise when the dominant paradigm under which normal science operates is rendered incompatible with new phenomena, facilitating the adoption of a new theory or paradigm.
History
The nature of scientific revolutions has been studied by modern philosophy since Immanuel Kant used the phrase in the preface to the second edition of his Critique of Pure Reason (1787). Kant used the phrase "revolution of the way of thinking" to refer to Greek mathematics and Newtonian physics. In the 20th century, new developments in the basic concepts of mathematics, physics, and biology revitalized interest in the question among scholars.
Original usage
In his 1962 book The Structure of Scientific Revolutions, Kuhn explains the development of paradigm shifts in science into four stages:
Normal science – In this stage, which Kuhn sees as most prominent in science, a dominant paradigm is active. This paradigm is characterized by a set of theories and ideas that define what is possible and rational to do, giving scientists a clear set of tools to approach certain problems. Some examples of dominant paradigms that Kuhn gives are: Newtonian physics, caloric theory, and the theory of electromagnetism. Insofar as paradigms are useful, they expand both the scope and the tools with which scientists do research. Kuhn stresses that, rather than being monolithic, the paradigms that define normal science can be particular to different people. A chemist and a physicist might operate with different paradigms of what a helium atom is. Under normal science, scientists encounter anomalies that cannot be explained by the universally accepted paradigm within which scientific progress has theretofore been made.
Extraordinary research – When enough significant anomalies have accrued against a current paradigm, the scientific discipline is thrown into a state of crisis. To address the crisis, scientists push the boundaries of normal science in what Kuhn calls “extraordinary research”, which is characterized by its exploratory nature. Without the structures of the dominant paradigm to depend on, scientists engaging in extraordinary research must produce new theories, thought experiments, and experiments to explain the anomalies. Kuhn sees the practice of this stage – “the proliferation of competing articulations, the willingness to try anything, the expression of explicit discontent, the recourse to philosophy and to debate over fundamentals” – as even more important to science than paradigm shifts.
Adoption of a new paradigm – Eventually a new paradigm is formed, which gains its own new followers. For Kuhn, this stage entails both resistance to the new paradigm, and reasons why individual scientists adopt it. According to Max Planck, "a new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Because scientists are committed to the dominant paradigm, and paradigm shifts involve gestalt-like changes, Kuhn stresses that paradigms are difficult to change. However, paradigms can gain influence by explaining or predicting phenomena much better than before (e.g., Bohr's model of the atom) or by being more subjectively pleasing. During this phase, proponents of competing paradigms address what Kuhn considers the core of a paradigm debate: whether a given paradigm will be a good guide for problems – things that neither the proposed paradigm nor the dominant paradigm is currently capable of solving.
Aftermath of the scientific revolution – In the long run, the new paradigm becomes institutionalized as the dominant one. Textbooks are written, obscuring the revolutionary process.
Features
Paradigm shifts and progress
A common misinterpretation of paradigms is the belief that the discovery of paradigm shifts and the dynamic nature of science (with its many opportunities for subjective judgments by scientists) are a case for relativism: the view that all kinds of belief systems are equal. Kuhn vehemently denies this interpretation and states that when a scientific paradigm is replaced by a new one, albeit through a complex social process, the new one is always better, not just different.
Incommensurability
These claims of relativism are, however, tied to another claim that Kuhn does at least somewhat endorse: that the language and theories of different paradigms cannot be translated into one another or rationally evaluated against one another—that they are incommensurable. This gave rise to much talk of different peoples and cultures having radically different worldviews or conceptual schemes—so different that whether or not one was better, they could not be understood by one another. However, the philosopher Donald Davidson published the highly regarded essay "On the Very Idea of a Conceptual Scheme" in 1974 arguing that the notion that any languages or theories could be incommensurable with one another was itself incoherent. If this is correct, Kuhn's claims must be taken in a weaker sense than they often are. Furthermore, the hold of the Kuhnian analysis on social science has long been tenuous, with the wide application of multi-paradigmatic approaches in order to understand complex human behaviour.
Gradualism vs. sudden change
Paradigm shifts tend to be most dramatic in sciences that appear to be stable and mature, as in physics at the end of the 19th century. At that time, physics seemed to be a discipline filling in the last few details of a largely worked-out system.
In The Structure of Scientific Revolutions, Kuhn wrote, "Successive transition from one paradigm to another via revolution is the usual developmental pattern of mature science" (p. 12). Kuhn's idea was itself revolutionary in its time as it caused a major change in the way that academics talk about science. Thus, it could be argued that it caused or was itself part of a "paradigm shift" in the history and sociology of science. However, Kuhn would not recognise such a paradigm shift. In the social sciences, people can still use earlier ideas to discuss the history of science.
Philosophers and historians of science, including Kuhn himself, ultimately accepted a modified version of Kuhn's model, which synthesizes his original view with the gradualist model that preceded it.
Examples
Natural sciences
Some of the "classical cases" of Kuhnian paradigm shifts in science are:
1543 – The transition in cosmology from a Ptolemaic cosmology to a Copernican one.
1543 – The acceptance of the work of Andreas Vesalius, whose work De humani corporis fabrica corrected the numerous errors in the previously held system of human anatomy created by Galen.
1687 – The transition in mechanics from Aristotelian mechanics to classical mechanics.
1783 – The acceptance of Lavoisier's theory of chemical reactions and combustion in place of phlogiston theory, known as the chemical revolution.
The transition in optics from geometrical optics to physical optics with Augustin-Jean Fresnel's wave theory.
1826 – The discovery of hyperbolic geometry.
1830 to 1833 – Geologist Charles Lyell published Principles of Geology, which not only put forth the concept of uniformitarianism, in direct contrast to catastrophism, the popular geological theory at the time, but also used geological evidence to determine that the age of the Earth was greater than the 6,000 years previously held to be true.
1859 – The revolution in evolution from goal-directed change to Charles Darwin's natural selection.
1880 – The germ theory of disease began overtaking Galen's miasma theory.
1905 – The development of quantum mechanics, which replaced classical mechanics at microscopic scales.
1887 to 1905 – The transition from the luminiferous aether present in space to electromagnetic radiation in spacetime.
1919 – The transition between the worldview of Newtonian gravity and general relativity.
1920 – The emergence of the modern view of the Milky Way as just one of countless galaxies within an immeasurably vast universe following the results of the Smithsonian's Great Debate between astronomers Harlow Shapley and Heber Curtis.
1952 – Chemists Stanley Miller and Harold Urey perform an experiment which simulated the conditions on the early Earth that favored chemical reactions that synthesized more complex organic compounds from simpler inorganic precursors, kickstarting decades of research into the chemical origins of life.
1964 – The discovery of cosmic microwave background radiation leads to the big bang theory being accepted over the steady state theory in cosmology.
1965 – The acceptance of plate tectonics as the explanation for large-scale geologic changes.
1969 – Astronomer Victor Safronov, in his book Evolution of the protoplanetary cloud and formation of the Earth and the planets, developed the early version of the current accepted theory of planetary formation.
1974 – The November Revolution, with the discovery of the J/psi meson, and the acceptance of the existence of quarks and the Standard Model of particle physics.
1960 to 1985 – The acceptance of the ubiquity of nonlinear dynamical systems, as promoted by chaos theory, instead of a Laplacian world-view of deterministic predictability.
Social sciences
In Kuhn's view, the existence of a single reigning paradigm is characteristic of the natural sciences, while philosophy and much of social science were characterized by a "tradition of claims, counterclaims, and debates over fundamentals." Others have applied Kuhn's concept of paradigm shift to the social sciences.
The movement known as the cognitive revolution moved away from behaviourist approaches to psychology and the acceptance of cognition as central to studying human behavior.
Anthropologist Franz Boas published The Mind of Primitive Man, which integrated his theories concerning the history and development of cultures and established a program that would dominate American anthropology in the following years. His research, along with that of his colleagues, combatted and debunked the claims being made by scholars at the time, given that scientific racism and eugenics were dominant in many universities and institutions dedicated to studying humans and society. Anthropology would eventually apply a holistic approach, utilizing four subfields to study humans: archaeology, and cultural, evolutionary, and linguistic anthropology.
At the turn of the 20th century, sociologists, along with other social scientists developed and adopted methodological antipositivism, which sought to uphold a subjective perspective when studying human activities pertaining to culture, society, and behavior. This was in stark contrast to positivism, which took its influence from the methodologies utilized within the natural sciences.
First proposed by Ferdinand de Saussure in 1879, the laryngeal theory in Indo-European linguistics postulated the existence of "laryngeal" consonants in the Proto-Indo-European language (PIE), a theory that was confirmed by the discovery of the Hittite language in the early 20th century. The theory has since been accepted by the vast majority of linguists, paving the way for the internal reconstruction of the syntax and grammatical rules of PIE and is considered one of the most significant developments in linguistics since the initial discovery of the Indo-European language family.
The adoption of radiocarbon dating by archaeologists has been proposed as a paradigm shift because of how it greatly increased the time depth the archaeologists could reliably date objects from. Similarly the use of LIDAR for remote geospatial imaging of cultural landscapes, and the shift from processual to post-processual archaeology have both been claimed as paradigm shifts by archaeologists.
The emergence of three-phase traffic theory created by Boris Kerner in vehicular traffic science as an alternative theory to classical (standard) traffic flow theories.
Applied sciences
More recently, paradigm shifts are also recognisable in applied sciences:
In medicine, the transition from "clinical judgment" to evidence-based medicine.
In Artificial Intelligence, the transition from a knowledge-based to a data-driven paradigm has been discussed from 2010.
Other uses
The term "paradigm shift" has found uses in other contexts, representing the notion of a major change in a certain thought pattern—a radical change in personal beliefs, complex systems or organizations, replacing the former way of thinking or organizing with a radically different way of thinking or organizing:
M. L. Handa, a professor of sociology in education at O.I.S.E. University of Toronto, Canada, developed the concept of a paradigm within the context of social sciences. He defines what he means by "paradigm" and introduces the idea of a "social paradigm". In addition, he identifies the basic component of any social paradigm. Like Kuhn, he addresses the issue of changing paradigms, the process popularly known as "paradigm shift". In this respect, he focuses on the social circumstances that precipitate such a shift. Relatedly, he addresses how that shift affects social institutions, including the institution of education.
The concept has been developed for technology and economics in the identification of new techno-economic paradigms as changes in technological systems that have a major influence on the behaviour of the entire economy (Carlota Perez; earlier work only on technological paradigms by Giovanni Dosi). This concept is linked to Joseph Schumpeter's idea of creative destruction. Examples include the move to mass production and the introduction of microelectronics.
Two photographs of the Earth from space, "Earthrise" (1968) and "The Blue Marble" (1972), are thought to have helped to usher in the environmentalist movement, which gained great prominence in the years immediately following distribution of those images.
Hans Küng applies Thomas Kuhn's theory of paradigm change to the entire history of Christian thought and theology. He identifies six historical "macromodels": 1) the apocalyptic paradigm of primitive Christianity, 2) the Hellenistic paradigm of the patristic period, 3) the medieval Roman Catholic paradigm, 4) the Protestant (Reformation) paradigm, 5) the modern Enlightenment paradigm, and 6) the emerging ecumenical paradigm. He also discusses five analogies between natural science and theology in relation to paradigm shifts. Küng addresses paradigm change in his books, Paradigm Change in Theology and Theology for the Third Millennium: An Ecumenical View.
In the later part of the 1990s, 'paradigm shift' emerged as a buzzword, popularized as marketing speak and appearing more frequently in print and publication. In his book Mind The Gaffe, author Larry Trask advises readers to refrain from using it, and to use caution when reading anything that contains the phrase. It is referred to in several articles and books as abused and overused to the point of becoming meaningless.
The concept of technological paradigms has been advanced, particularly by Giovanni Dosi.
Criticism
In a 2015 retrospective on Kuhn, the philosopher Martin Cohen describes the notion of the paradigm shift as a kind of intellectual virus – spreading from hard science to social science and on to the arts and even everyday political rhetoric today. Cohen claims that Kuhn had only a very hazy idea of what it might mean and, in line with the Austrian philosopher of science Paul Feyerabend, accuses Kuhn of retreating from the more radical implications of his theory, which are that scientific facts are never really more than opinions whose popularity is transitory and far from conclusive. Cohen says scientific knowledge is less certain than it is usually portrayed, and that science and knowledge generally is not the 'very sensible and reassuringly solid sort of affair' that Kuhn describes, in which progress involves periodic paradigm shifts in which much of the old certainties are abandoned in order to open up new approaches to understanding that scientists would never have considered valid before. He argues that information cascades can distort rational, scientific debate. He has focused on health issues, including the example of highly mediatised 'pandemic' alarms, and why they have turned out eventually to be little more than scares.
See also
References
Citations
Sources
External links
MIT 6.933J – The Structure of Engineering Revolutions. From MIT OpenCourseWare, course materials (graduate level) for a course on the history of technology through a Kuhnian lens.
Change
Cognition
Concepts in epistemology
Concepts in the philosophy of science
Consensus reality
Critical thinking
Epistemology of science
Historiography of science
Innovation
Philosophical theories
Reasoning
Scientific Revolution
Thomas Kuhn
Career development

Career development refers to the process an individual may undergo to evolve their occupational status. It is the process of making decisions for long-term learning, to align personal needs for physical or psychological fulfillment with career advancement opportunities. Career development can also refer to the totality of an individual's work-related experiences leading up to the occupational role they hold within an organization.
Career development can occur on an individual basis or an organizational level.
Career development planning
On an individual basis, career planning encompasses a process in which the individual is self-aware of their personal needs and desires for fulfillment in their personal life, in conjunction with the career they hold. Because every person's experiences are unique, these needs and desires contribute to the different careers that people acquire over their lifespan.
Long-term careers
Careers that are long-term commitments throughout an individual's life are referred to as steady-state careers. The person works towards retirement with specialized skillsets learned throughout their life. For example, a physician completes the steady process of graduating from medical school and then works in the medical profession until retirement. A steady-state career may also mean holding the same occupational role in an organization for an extended period and becoming specialized in that area of expertise. For example, a retail manager who has worked in the sales industry for much of their life will have the knowledge, skills, and attributes needed to manage non-managerial staff and coordinate the job tasks to be fulfilled by subordinates.
Careers that require new initiatives of growth and responsibility upon accepting new roles are referred to as linear careers, as every new opportunity entails greater responsibility and decision-making power within an organizational environment. A linear career path involves vertical movement up the management hierarchy through promotion. For example, a higher-level management position in a company entails more responsibility for decision-making and the allocation of resources to run the company effectively and efficiently. Mid-level managers and top-level managers/CEOs are described as having linear careers, as their vertical movement in the organizational hierarchy entails growing responsibility for planning, controlling, leading, and organizing managerial tasks.
Short-term careers
When individuals take on short-term or temporary work, the result is a transitory or spiral career. Transitory careers occur when a person undergoes frequent job changes in which each task is not similar to the preceding one. For example, a fast-food worker who leaves the food industry after a year to work as an entry-level bookkeeper or an administrative assistant in an office setting has made a transitory career change: the worker's skills and knowledge from the previous job role will not be relevant to the new role.
A spiral career is a series of short-term jobs that are non-identical to one another but still contribute to the building of a specific skill set that individuals accumulate over their lifetime.
Career development perspectives: individual versus organizational needs
An individual's personal initiatives that they pursue for their career development are primarily concerned with their personal values, goals, interests, and the path required to fulfill these desires. A degree of control and a sense of urgency over a personal career development path can require an individual to pursue additional education or training initiatives that align with their goals. Relatedly, John L. Holland's six occupational themes (the Holland Codes) categorize people as investigative, realistic, artistic, social, enterprising, or conventional, with the career path depending on the characteristics that an individual embodies.
The factors that influence an individual's career-goal decisions also include the environmental factors directly affecting them. Decisions are based on varying aspects of work-life balance, the desire to align career options with personal values, and the degree of stimulation or growth a role offers.
A corporate organization can provide career development opportunities through the Human Resources functions of Training and Development. The primary purpose of Training and Development is to ensure that the strategic planning of the organizational goals remains adaptable to the demands of a changing environment. Upon recruiting and hiring employees, an organization's Human Resources department is responsible for providing clear job descriptions regarding the job tasks required for the role, along with opportunities for job rotation, transfers, and promotions. Hiring managers are responsible for ensuring that subordinates are aware of their job tasks and that communication remains efficient. Managers are also responsible for nurturing and creating a favorable work environment, to foster the long-term learning, development, and talent acquisition of their subordinates. Consequently, the extent to which a manager embraces the delegation of training and developing their employees is a key factor in employee retention and turnover.
Relative context of social identity in career planning
As the process of career planning involves balancing the varying demands in an individual's life, socio-demographic factors relating to an individual's age, race, gender, and socio-economic status may influence the extent to which they pursue career planning or other opportunities for training and development of skills. The varying aspects of social identity, in the context of finding a balance among the demands of personal life, influence individuals to change, adapt, or abandon their career path.
Men and women, for example, make different types of decisions based on the situational factors that require balance. Women tend to make more choices to balance work and non-work priorities such as child or elder care, which may discourage some women from pursuing their career path in favor of prioritizing assistance for others. Men make decisions based not only on balancing work and non-work priorities but also on advancement and added income.
See also
Employment counsellor
Global Career Development Facilitator (GCDF)
Holland Codes
Occupational Outlook Handbook
Personality psychology
Notable figures in career development
References
Business terms
Role theory

Role theory is a concept in sociology and in social psychology that considers most of everyday activity to be the acting-out of socially defined categories (e.g., mother, manager, teacher). Each role is a set of rights, duties, expectations, norms, and behaviors that a person has to face and fulfill. The model is based on the observation that people behave in a predictable way, and that an individual's behavior is context specific, based on social position and other factors. Research conducted on role theory mainly centers around the concepts of consensus, role conflict, role taking, and conformity. The theatre is a metaphor often used to describe role theory.
Although the word role (or roll) has existed in European languages for centuries, as a sociological concept, the term has only been around since the 1920s and 1930s. It became more prominent in sociological discourse through the theoretical works of George Herbert Mead, Jacob L. Moreno, Talcott Parsons, Ralph Linton, and Georg Simmel. Two of Mead's concepts—the mind and the self—are the precursors to role theory.
The theory posits the following propositions about social behavior:
The division of labor in society takes the form of the interaction among heterogeneous specialized positions that we call roles;
Social roles include "appropriate" and "permitted" forms of behavior, guided by social norms, which are commonly known and hence determine expectations;
Roles are occupied by individuals, or "actors";
When individuals approve of a social role (i.e., they consider the role "legitimate" and "constructive"), they will incur costs to conform to role norms, and will also incur costs to punish those who violate role norms;
Changed conditions can render a social role outdated or illegitimate, in which case social pressures are likely to lead to role change;
The anticipation of rewards and punishments, as well as the satisfaction of behaving in a prosocial way, account for why agents conform to role requirements.
Role theories differ in their general perspective: on one side there is a more functional perspective, which can be contrasted with the more micro-level approach of the symbolic interactionist tradition. The type of role theory determines how closely individuals' actions are tied to society, as well as how empirically testable a particular role theory perspective may be.
Depending on the general perspective of the theoretical tradition, there are many types of role theory; however, it may be divided into two major types: structural functionalism role theory and dramaturgical role theory. Structural functionalism role theory holds that everyone has a place in the social structure, and that every place has a corresponding role with its own set of expectations and behaviors. Life is more structured, and there is a specific place for everything. In contrast, dramaturgical role theory defines life as a never-ending play in which we are all actors. The essence of this role theory is to role-play in an acceptable manner in society.
Robert Kegan's theory of adult development is relevant to understanding role theory. Three pivotal stages in his theory are the socialized mind, the self-authoring mind, and the self-transforming mind. People in the socialized mindset base their actions on the opinions of others. The self-authoring mind breaks loose of others' thoughts and makes its own decisions. The self-transforming mind listens to the thoughts and opinions of others, yet is still able to decide for itself. Less than 1 percent of people reach the self-transforming mindset, while 60 percent of people remain in the socialized mindset well into their adult years. Role theory describes how people follow perceived roles and standards that society normalizes; through the socialized mind, people are confined to the roles that have been placed around them. This internalization of the values of others in society underlies role theory.
A key insight of this theory is that role conflict occurs when a person is expected to simultaneously act out multiple roles that carry contradictory expectations. They are pulled in different ways as they strive to hold various types of societal standards and statuses.
Role
Substantial debate exists in the field over the meaning of the role in role theory. A role can be defined as a social position, behavior associated with a social position, or a typical behavior. Some theorists have put forward the idea that roles are essentially expectations about how an individual ought to behave in a given situation, whereas others consider a role to mean how individuals actually behave in a given social position. Others have suggested that a role is a characteristic behavior or expected behavior, a part to be played, or a script for social conduct.
In sociology, there are different categories of social roles:
cultural roles: roles given by culture (e.g. priest)
social differentiation: e.g. teacher, taxi driver
situation-specific roles: e.g. eye witness
bio-sociological roles: e.g. as human in a natural system
gender roles: as a man, woman, mother, father, etc.
Role theory models behavior as patterns of behaviors to which one can conform, with this conformity being based on the expectations of others.
It has been argued that a role must in some sense be defined in relation to others. The manner and degree of this relation are debated by sociologists. Turner used the concept of an "other-role", arguing that the process of defining a role involves negotiating one's role with other-roles.
The construction of roles
Turner argued that the process of describing a role also modifies the role, which would otherwise remain implicit; he referred to this process as role-making, arguing that very formal roles such as those in the military are not representative of roles in general because there the role-making process is suppressed. Sociologist Howard S. Becker similarly claims that the label given and the definition used in a social context can change actions and behaviors.
Situation-specific roles develop ad hoc in a given social situation. However, it can be argued that the expectations and norms that define such an ad hoc role are themselves defined by the social role.
The word consensus is used when a group of people holds the same expectations through agreement. We live in a society in which people know how they should act as a result of learned behaviors stemming from social norms. Society as a whole takes on typical roles and the norms expected of them; subsequently, a standard is created through the conformity of these social groups.
The relationship between roles and norms
Some theorists view behavior as being enforced by social norms. Turner instead argues that there is a norm of consistency, so that failing to conform to a role breaks a norm because it violates consistency.
Cultural roles
Cultural roles are seen as a matter of course and are mostly stable. New roles can develop and old roles can disappear during cultural changes, which are affected by political and social conflicts. For example, the feminist movement initiated a change in male and female roles in Western societies. The roles, or more specifically the exact duties, of men are being questioned. With more women going further in school than men come greater financial and occupational benefits; however, these benefits have not been shown to increase women's happiness.
Social differentiation
Social differentiation received a lot of attention due to the development of different job roles. Robert K. Merton distinguished between intrapersonal and interpersonal role conflicts. For example, a foreman has to develop his own social role facing the expectations of his team members and his supervisor – this is an interpersonal role conflict. He also has to arrange his different social roles as father, husband, club member – this is an intrapersonal role conflict.
Ralf Dahrendorf distinguished between must-expectations, with sanctions; shall-expectations, with sanctions and rewards; and can-expectations, with rewards. The foreman must avoid corruption; he should satisfy his reference groups (e.g., team members and supervisors); and he can be sympathetic. Dahrendorf argues that another component of role theory is that people accept their own roles in society; it is not society that imposes them.
Role behavior
Over the course of their lives, people face different social roles, sometimes several at the same time in different social situations. Social roles evolve: some disappear and new ones develop. Role behavior is influenced by:
the norms that determine a social situation;
the internal and external expectations connected to a social role;
the social sanctions (punishment and reward) used to influence role behavior.
These three aspects are used to evaluate one's own behavior as well as the behavior of other people. Heinrich Popitz defines social roles as norms of behavior that a special social group has to follow. Norms of behavior are a set of behaviors that have become typical among group members; in case of deviance, negative sanctions follow.
Gender roles
Gender has played a crucial role in societal norms and in the distinction between how female and male roles are viewed in society, specifically within the workplace and in the home. Historically, society created a division of roles based on gender: gender was treated as a social difference between female and male, whereas sex was a matter of nature. Gender became a way to categorize men and women and divide them into their societal roles. Although gender is important, women are also categorized in other ways, such as by race and class experience. As long as societal roles derive from gender, there will be a separation between females and males.
Throughout history, the roles of women and men have changed with time. Men developed traits that suited them for providing, such as hunting and labor, while women acquired traits centered around children and home life. As industry grew, men used their strength to gain power and, as a result, obtained the majority of jobs.
Through the distinct roles of male and female, women developed communal traits that were needed for caring and nurturing those around them. Males developed agentic traits that allowed for roles in leadership, hunting, and labor.
As jobs and industry have moved away from strength and labor, women have advanced their education for employment. The sex segregation between women and men has decreased as society has evolved away from traditional gender roles.
In public relations
Role theory is a perspective that considers everyday activity to be the acting-out of socially defined categories. It splits into two narrower concepts: status, one's position within a social system or group; and role, one's pattern of behavior associated with a status.
Organizational role is defined as "recurring actions of an individual, appropriately interrelated with the repetitive activities of others so as to yield a predictable outcome" (Katz & Kahn, 1978). Within an organization there are three main typologies:
Two-role typology:
Manager
Technician
Four-role typology:
Expert prescriber
Communication facilitator
Problem-Solving Process Facilitator
Communication technician
Five-role typology:
Monitor and evaluator
Key policy and strategic advisor
Troubleshooter/problem solver
Issues management expert
Communication technician
Role conflict, strain, or making
Despite variations in the terms used, the central component of all of the formulations is incompatibility.
Role conflict is a conflict among the roles corresponding to two or more statuses; for example, teenagers who have to deal with pregnancy (statuses: teenager, mother). Role conflict is said to exist when there are important differences among the ratings given for various expectations; by comparing the extent of agreement or disagreement among the ranks, a measure of role conflict can be obtained.
Role strain or "role pressure" may arise when there is a conflict in the demands of roles, when an individual does not agree with the assessment of others concerning his or her performance in his or her role, or from accepting roles that are beyond an individual's capacity.
Role making is defined by Graen as leader–member exchange.
At the same time, a person may have limited power to negotiate away from accepting roles that cause strain, because he or she is constrained by societal norms, or has limited social status from which to bargain.
Criticism and limitations
Role theorists have noted that a weakness of role theory is in describing and explaining deviant behavior.
Role theory has been criticized on several grounds: for reinforcing commonly held prejudices about how people should behave and portray themselves, as well as how others should behave; for viewing the individual as responsible for fulfilling the expectations of a role rather than viewing others as responsible for creating a role the individual can perform; and for insufficiently explaining power relations, since in some situations an individual does not consensually fulfill a role but is forced into behaviors by power.
It is also argued that role theory does not explain individual agency in negotiating one's role, and that role theory artificially merges roles when in practice an individual might combine roles together.
Others have argued that the concept of role takes on such a broad definition as to be meaningless.
See also
Behaviorism
Conformity
Deviance (sociology)
Dramaturgical perspective
Game studies
Generalized other
Hedonism
Role engulfment
Role model
Role suction
Transactional analysis
Notes
References
Bibliography
Robert K. Merton, Social Theory and Social Structure, 1949
Ralf Dahrendorf, Homo sociologicus, 1958 (in German, many editions)
Rose Laub Coser, "The Complexity of Roles as a Seedbed of Individual Autonomy", in: The Idea of Social Structure: Papers in Honor of Robert K. Merton, 1975
Ralph Linton, The Study of Man, Chapter 8, "Status and Role", 1936
External links
Divergent thinking

Divergent thinking is a thought process used to generate creative ideas by exploring many possible solutions. It typically occurs in a spontaneous, free-flowing, "non-linear" manner, such that many ideas are generated in an emergent cognitive fashion. Many possible solutions are explored in a short amount of time, and unexpected connections are drawn. Following divergent thinking, ideas and information are organized and structured using convergent thinking, which follows a particular set of logical steps to arrive at one solution, which in some cases is a "correct" solution.
The psychologist J.P. Guilford first coined the terms convergent thinking and divergent thinking in 1956.
Activities
Activities which promote divergent thinking include creating lists of questions, setting aside time for thinking and meditation, brainstorming, subject mapping, bubble mapping, keeping a journal, playing tabletop role-playing games, creating artwork, and free writing. In free writing, a person will focus on one particular topic and write non-stop about it for a short period of time, in a stream of consciousness fashion.
Playfulness
Parallels have been drawn between playfulness in kindergarten-aged children and divergent thinking. In a study documented by Lieberman, the relationship between these two traits was examined, with playfulness being "conceptualized and operationally defined in terms of five traits: physical, social and cognitive spontaneity; manifest joy; and sense of humour". The author noted that during the study, while observing the children's behaviour at play, they "noted individual differences in spontaneity, overtones of joy, and sense of humour that imply a relationship between the foregoing qualities and some of the factors found in the intellectual structure of creative adults and adolescents". This study highlighted the link between behaviours of divergent thinking, or creativity, in playfulness during childhood and those displayed in later years, in creative adolescents and adults.
Future research opportunities in this area could explore a longitudinal study of kindergarten-aged children and the development or evolution of divergent thinking abilities throughout adolescence, into adulthood, in order to substantiate the link drawn between playfulness and divergent thinking in later life. This long-term study would help parents and teachers identify this behaviour (or lack thereof) in children, specifically at an age when it can be reinforced if already displayed, or supported if not yet displayed.
Divergent thinking and mental health
Certain divergent thinking patterns have been associated with mental health disorders, while divergent thinking as a practice may have therapeutic benefits.
Divergent thinking and psychopathology
Divergent thinking can be counterproductive when used excessively: extreme divergent thinkers can end up in a loop of endless possibilities without making a decision. Schizophrenia has been described as a variation of extreme divergent thinking, exhibiting actions and thoughts that do not yield creativity. Some well-known artists and writers display extreme thinking traits, including impulsive nonconformity and over-inclusive thinking.
Therapeutic value of divergent thinking
According to Bennliure and Moral, the ability to use divergent thinking can improve the mental health of young adults. Mental health has major impacts on people's lives, and it can be beneficial to learn more about divergent thinking and how it can support coping mechanisms. Bennliure and Moral state that people with low divergent thinking can get overwhelmed by thinking of the same "repetitive" answer or thought process, leading to feelings of anxiety or depression. On the other hand, being able to create multiple ideas, answers, or plans of action for a given stressor produces fewer "thoughts of helplessness, catastrophism, and hopelessness." For this reason, divergent thinking can help lessen anxiety and depression symptoms by supporting "a more active and open approach" to problems and stressors.
Deductive reasoning
Divergent thinking encourages not only playfulness but reasoning skills as well. Pier-Luc Chantal, Emilie Gagnon-St-Pierre, and Henry Markovits of the Université du Québec à Montréal conducted a study of preschool-aged children in which the relationship between divergent thinking and deductive reasoning was observed. They found that incorporating components of divergent thinking into learning, such as generating unique ideas, "might be a powerful tool to improve reasoning." This approach stresses the idea that "deductive reasoning is not only about getting the 'right' answer but requires going beyond the most obvious ideas in order to generate even very unlikely possibilities."
Divergent thinking and aging
Guila Fusi, Sara Lavolpe, Nara Crepaldi, and Maria Luisa Rusconi conducted a systematic review of the effect of age on divergent thinking. They found that the relationship between age and DT abilities is not at all linear, but "complex and multidimensional." Many variables can influence DT abilities, including "educational level, intelligence, WM (working memory) abilities, and speed of processing." The authors argue that a theoretical discussion needs to be held before further research is done. Even so, "new and more accurate information about which of the DT abilities might be preserved or impaired in the elderly population could have significant practical implications."
Effects of positive and negative mood
In a study at the University of Bergen, Norway, the effects of positive and negative mood on divergent thinking were examined. Nearly two hundred art and psychology students participated, first measuring their moods with an adjective checklist before performing the required tasks. The results showed a clear distinction in performance between those with a self-reported positive versus negative mood.
A series of related studies suggested a link between positive mood and the promotion of cognitive flexibility. In a 1990 study by Murray, Sujan, Hirt and Sujan, this hypothesis was examined more closely and "found positive mood participants were able to see relations between concepts", as well as demonstrating advanced abilities "in distinguishing the differences between concepts". This group of researchers drew a parallel between "their findings and creative problem solving by arguing that participants in a positive mood are better able both to differentiate between and to integrate unusual and diverse information". This suggests that subjects in an elevated mood are at a distinct cognitive advantage when performing divergent-thinking tasks. Further research could explore effective strategies to improve divergent thinking when in a negative mood, for example how to move beyond "optimizing strategies" into "satisficing strategies", rather than focusing on "the quality of their ideas", in order to generate more ideas and creative solutions.
Effects of sleep deprivation
While little research has been conducted on the impact of sleep deprivation on divergent thinking, one study by J.A. Horne illustrated that even when motivation to perform well is maintained, sleep loss can still impair divergent-thinking performance. In this study, twelve subjects were deprived of sleep for thirty-two hours, while a control group of twelve others maintained a normal sleep routine. Subjects' performance on both a word fluency task and a challenging nonverbal planning test was "significantly impaired by sleep loss", even when the factor of personal motivation to perform well was controlled. This study showed that even "one night of sleep loss can affect divergent thinking", which "contrasts with the outcome for convergent thinking tasks, which are more resilient to short-term sleep loss". Research on sleep deprivation and divergent thinking could be further explored on a biological or chemical level, to identify why cognitive functioning related to divergent thinking is impacted by lack of sleep, and whether the impact differs if subjects are deprived of REM versus non-REM sleep.
Divergent thinking modeling
Both convergent and divergent processing have been subject to modeling. The first process has been modeled by emulating responses to the Remote Associates Test (RAT) by Olteţeanu and Falomir (2015) and Klein and Badia (2015). The RAT was modeled by both research teams as a proof-of-concept to investigate how remote associative concepts relate to statistically based Natural Language Processing techniques and how these connections relate to the convergent and divergent cognitive processes involved in creativity. According to Klein and Badia, distant associates are tracked down and chosen using a strictly lexical-based modeling technique, where both the frequency of co-occurrence and the frequency of each term in the corpus are valued in the convergent and divergent parts of the process.
On a more divergent focus, Klein and Badia (2022) and Olteţeanu and Falomir (2016) proposed emulations of divergent thinking by modeling the Alternative Uses Task (AUT). The former researchers proposed a simple co-occurrence-based method, with and without grammatical labeling, to solve this test. The latter applied what they named Object Replacement and Object Composition with specific reference to the AUT. Other ideas for DT generation include Veale and Li's (2016) template approach and that of López-Ortega (2013), who proposed an application of divergent exploration in a multi-agent system.
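To make the lexical approach concrete, the sketch below scores RAT-style candidate answers by sentence-level co-occurrence with the cue words, weighting joint frequency against each word's overall frequency, in the spirit of the statistically based techniques described above. It is a minimal illustration, not a reimplementation of the published models; the toy corpus, tokenization, and scoring formula are all assumptions made for the example.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(sentences):
    """Count sentence-level co-occurrence of word pairs and word frequencies.

    `sentences` is a list of token lists; pair order is ignored.
    """
    pair_counts = Counter()
    word_counts = Counter()
    for tokens in sentences:
        unique = set(tokens)
        word_counts.update(unique)
        for pair in combinations(sorted(unique), 2):
            pair_counts[pair] += 1
    return pair_counts, word_counts

def associate_score(candidate, cues, pair_counts, word_counts):
    """Score a candidate answer by its normalized co-occurrence with each cue.

    Both joint frequency and individual word frequency enter the score, so
    common filler words do not win by frequency alone. A candidate that never
    co-occurs with some cue scores zero.
    """
    score = 1.0
    for cue in cues:
        joint = pair_counts.get(tuple(sorted((candidate, cue))), 0)
        if joint == 0:
            return 0.0
        score *= joint / (word_counts[candidate] * word_counts[cue]) ** 0.5
    return score

# Toy corpus and a RAT-style item: cues "cottage", "swiss", "cake" -> "cheese".
corpus = [
    "the cottage cheese was fresh".split(),
    "swiss cheese has holes".split(),
    "she baked a cheese cake".split(),
    "the swiss cottage was small".split(),
]
pairs, words = cooccurrence_counts(corpus)
cues = ["cottage", "swiss", "cake"]
candidates = {w for sent in corpus for w in sent} - set(cues)
print(max(candidates, key=lambda w: associate_score(w, cues, pairs, words)))
# prints "cheese" on this toy corpus
```

On this toy corpus only "cheese" co-occurs with all three cues, so it wins; a real model would need a large corpus and a more careful weighting scheme.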
See also
References
External links
Changing Education Paradigms by RSA Animate on YouTube
Fuel Creativity in the Classroom With Divergent Thinking on Edutopia
What Type of Thinker Are You? at Psychology Today
How Generative AI Can Augment Human Creativity: Use it to promote divergent thinking at Harvard Business Review
Problem solving skills
Postpositivism

Postpositivism or postempiricism is a metatheoretical stance that critiques and amends positivism and has impacted theories and practices across philosophy, social sciences, and various models of scientific inquiry. While positivists emphasize independence between the researcher and the researched person (or object), postpositivists argue that theories, hypotheses, background knowledge and values of the researcher can influence what is observed. Postpositivists pursue objectivity by recognizing the possible effects of biases. While positivists emphasize quantitative methods, postpositivists consider both quantitative and qualitative methods to be valid approaches.
Philosophy
Epistemology
Postpositivists believe that human knowledge is based not on a priori assessments from an objective individual, but rather upon human conjectures. As human knowledge is thus unavoidably conjectural, the assertion of these conjectures is warranted, or more specifically, justified by a set of warrants, which can be modified or withdrawn in the light of further investigation. However, postpositivism is not a form of relativism, and generally retains the idea of objective truth.
Ontology
Postpositivists believe that a reality exists, but, unlike positivists, they believe reality can be known only imperfectly. Postpositivists also draw from social constructionism in forming their understanding and definition of reality.
Axiology
While positivists believe that research is or can be value-free or value-neutral, postpositivists take the position that bias is undesired but inevitable, and therefore the investigator must work to detect and try to correct it. Postpositivists work to understand how their axiology (i.e. values and beliefs) may have influenced their research, including through their choice of measures, populations, questions, and definitions, as well as through their interpretation and analysis of their work.
History
Historians identify two types of positivism: classical positivism, an empirical tradition first described by Henri de Saint-Simon and Auguste Comte in the first half of the 19th century, and logical positivism, which is most strongly associated with the Vienna Circle, which met near Vienna, Austria, in the 1920s and 1930s. Postpositivism is the name D.C. Phillips gave to a group of critiques and amendments which apply to both forms of positivism.
One of the first thinkers to criticize logical positivism was Karl Popper. He advanced falsification in lieu of the logical positivist idea of verificationism. Falsificationism argues that it is impossible to verify that beliefs about universals or unobservables are true, though it is possible to reject false beliefs if they are phrased in a way amenable to falsification.
In 1965, Karl Popper and Thomas Kuhn debated the issue, as Kuhn's theory did not incorporate this idea of falsification. The debate has influenced contemporary research methodologies.
Thomas Kuhn is credited with having popularized and at least in part originated the post-empiricist philosophy of science. Kuhn's idea of paradigm shifts offers a broader critique of logical positivism, arguing that it is not simply individual theories but whole worldviews that must occasionally shift in response to evidence.
Postpositivism is not a rejection of the scientific method, but rather a reformation of positivism to meet these critiques. It reintroduces the basic assumptions of positivism: the possibility and desirability of objective truth, and the use of experimental methodology. The work of philosophers Nancy Cartwright and Ian Hacking is representative of these ideas. Postpositivism of this type is described in social science guides to research methods.
Structure of a postpositivist theory
Robert Dubin describes the basic components of a postpositivist theory as being composed of basic "units" or ideas and topics of interest, "laws of interactions" among the units, and a description of the "boundaries" for the theory. A postpositivist theory also includes "empirical indicators" to connect the theory to observable phenomena, and hypotheses that are testable using the scientific method.
According to Thomas Kuhn, a postpositivist theory can be assessed on the basis of whether it is "accurate", "consistent", "has broad scope", "parsimonious", and "fruitful".
Main publications
Karl Popper (1934) Logik der Forschung, rewritten in English as The Logic of Scientific Discovery (1959)
Thomas Kuhn (1962) The Structure of Scientific Revolutions
Karl Popper (1963) Conjectures and Refutations
Ian Hacking (1983) Representing and Intervening
Andrew Pickering (1984) Constructing Quarks
Peter Galison (1987) How Experiments End
Nancy Cartwright (1989) Nature's Capacities and Their Measurement
See also
Antipositivism
Philosophy of science
Scientism
Sociology of scientific knowledge
Notes
References
Alexander, J.C. (1995), Fin De Siecle Social Theory: Relativism, Reductionism and The Problem of Reason, London; Verso.
Phillips, D.C. & Nicholas C. Burbules (2000): Postpositivism and Educational Research. Lanham & Boulder: Rowman & Littlefield Publishers.
Zammito, John H. (2004): A Nice Derangement of Epistemes. Post-positivism in the study of Science from Quine to Latour. Chicago & London: The University of Chicago Press.
Popper, K. (1963), Conjectures and Refutations: The Growth of Scientific Knowledge, London; Routledge.
Moore, R. (2009), Towards the Sociology of Truth, London; Continuum.
External links
Positivism and Post-positivism
Positivism
Metatheory of science
Epistemological theories
OGSM

Objectives, goals, strategies and measures (OGSM) is a goal-setting and action-plan framework used in strategic planning. It is used by organizations, departments, teams and sometimes program managers to define and track measurable goals and actions to achieve an objective. Documenting objectives, goals, strategies and actions on a single page gives insights that can be missing with other frameworks. The framework defines the measures that will be followed to ensure that goals are met, and it helps groups work together toward common objectives across functions, geographical distance and levels of the organization. OGSM's origins can be traced back to Japan in the 1950s, stemming from the process and strategy work developed during the occupation of Japan in the post-World War II period. It has since been adopted by many Fortune 500 companies. In particular, Procter & Gamble uses the process to align the direction of their multinational corporation around the globe.
Purpose
The OGSM framework forms the basis for strategic planning and execution, as well as a strong management routine that keeps the plan part of day-to-day operations. It aligns leaders to the objective of the company, links key strategies to the financial goals, and brings visibility and accountability to the work of improving the capabilities of the company. Due to the concise format (usually one page) and simple color-coding to signal progress, OGSM allows for quick management by exception of any underperforming activity or (key) performance indicator. Finally, it is simple, robust, and developed as a team.
OGSM is designed to identify strategic priorities, capture market opportunities, optimize resources, enhance speed and execution, and align team members.
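The one-page plan itself can be thought of as a small nested structure: one objective, a handful of quantified goals, strategies that serve them, and color-coded measures per strategy. The Python sketch below is a hypothetical illustration of that shape and of the color-coded, management-by-exception review described above; the field names, the 80% amber threshold, and the sample plan are invented for the example and are not part of any official OGSM specification.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    target: float
    actual: float

    def status(self) -> str:
        """Traffic-light color used for management by exception."""
        ratio = self.actual / self.target
        if ratio >= 1.0:
            return "green"
        return "amber" if ratio >= 0.8 else "red"

@dataclass
class Strategy:
    description: str
    measures: list = field(default_factory=list)

@dataclass
class OGSM:
    objective: str    # qualitative ambition, in words
    goals: list       # quantified targets that make the objective concrete
    strategies: list  # the choices intended to deliver the goals

    def exceptions(self):
        """Return only underperforming measures, keeping review to one page."""
        return [m for s in self.strategies for m in s.measures
                if m.status() != "green"]

plan = OGSM(
    objective="Become the preferred snack brand in our region",
    goals=["Grow revenue 10% year over year", "Reach 25% market share"],
    strategies=[
        Strategy("Expand into the convenience-store channel",
                 [Measure("stores stocked", target=500, actual=430)]),
        Strategy("Launch two new product lines",
                 [Measure("product lines launched", target=2, actual=2)]),
    ],
)
for m in plan.exceptions():
    print(f"{m.name}: {m.actual}/{m.target} -> {m.status()}")
# prints: stores stocked: 430/500 -> amber
```

Keeping the review to the exceptions list is what lets the whole plan stay on one page while still surfacing any measure that needs attention.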
History
Research indicates that the OGSM method was developed by Procter & Gamble, but the verifiable origins of OGSM remain unclear.
Brought from Japan to corporate America in the 1950s, the OGSM concept was used initially by car manufacturers. Today, larger corporations, including Fortune 500 companies, employ this framework to keep their workforces centered on goals and objectives. Ideally, this tool attempts to express in one page what a traditional business plan takes 50 pages to explain.
Development in the U.S.
The OGSM has been employed by multinational corporations around the globe, including but not limited to:
Coca-Cola Company
Procter & Gamble
KPN
Royal FloraHolland
Reckitt Benckiser
Honda
Mars
MetLife
Triumph International
Intersnack cashew PVT LTD
Del Monte Foods
Procter & Gamble (P&G) provides an example of how these ideas translate into organizational practice. A.G. Lafley, the CEO of P&G, uses the OGSM tool to provide a framework for organizing the discussion about goals and strategic direction. While notably implemented at Fortune 500 companies, startups and SMBs also use OGSMs to create strategic alignment.
See also
SMART criteria
References
Management frameworks
Organizational cybernetics
Competency-based learning

Competency-based learning or competency-based education is a framework for teaching and assessment of learning. It is also described as a type of education based on predetermined "competencies," which focuses on outcomes and real-world performance. Competency-based learning is sometimes presented as an alternative to traditional methods of assessment in education.
Concept
In a competency-based education framework, students demonstrate their learned knowledge and skills in order to achieve specific predetermined "competencies." The set of competencies for a specific course or at a specific educational institution is sometimes referred to as the competency architecture. Students are generally assessed in various competencies at various points during a course, and usually have the opportunity to attempt a given competency multiple times and receive continuous feedback from instructors.
Key concepts that make up the competency-based education framework include demonstrated mastery of a competency, meaningful types of assessment, individualized support for students, and the creation and application of knowledge.
Methodology
In a competency-based learning model, the instructor is required to identify specific learning outcomes in terms of behavior and performance, including the appropriate criterion level to be used in evaluating achievement. Experiential learning is also an underpinning concept; competency-based learning is learner-focused and often learner-directed.
The methodology of competency-based learning recognizes that learners tend to find some individual skills or competencies more difficult than others. For this reason, the learning process generally allows different students to move at varying paces within a course. Additionally, where many traditional learning methods use summative testing, competency-based learning focuses on student mastery of individual learning outcomes. Students and instructors can dynamically revise instructional strategies based on student performance in specific competencies.
What it means to have mastered a competency depends on the subject matter and instructor criteria. In abstract learning, such as algebra, the learner may only have to demonstrate that they can identify an appropriate formula with some degree of reliability; in a subject matter that could affect safety, such as operating a vehicle, an instructor may require a more thorough demonstration of mastery.
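The mechanics just described, instructor-set criterion levels, repeated attempts, and mastery judged against the criterion rather than an average, can be sketched as a small data model. The Python below is a minimal illustration under assumed conventions (scores normalized to 0-1, mastery meaning the best attempt meets the criterion); real systems and instructor criteria will differ.

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    name: str
    criterion: float  # instructor-set mastery threshold
    attempts: list = field(default_factory=list)

    def record_attempt(self, score: float) -> None:
        """Store each attempt; learners may retry and receive feedback."""
        self.attempts.append(score)

    def mastered(self) -> bool:
        # Mastery rests on the best demonstration, not the average,
        # since students progress at their own pace across attempts.
        return any(score >= self.criterion for score in self.attempts)

course = [
    Competency("solve linear equations", criterion=0.8),
    # A safety-critical competency can demand a stricter criterion level.
    Competency("operate lab equipment safely", criterion=1.0),
]
course[0].record_attempt(0.6)    # first attempt falls short
course[0].record_attempt(0.85)   # a later attempt demonstrates mastery
course[1].record_attempt(0.9)

for c in course:
    print(f"{c.name}: {'mastered' if c.mastered() else 'in progress'}")
# solve linear equations: mastered
# operate lab equipment safely: in progress
```

Note how the two competencies carry different criterion levels, mirroring the algebra-versus-vehicle contrast above: the abstract skill is mastered at 0.8, while the safety-critical one still counts as in progress at 0.9.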
Usage
Western Governors University has used a competency-based model of education since it was chartered in 1996.
The Mastery Transcript Consortium is a group of public and private secondary schools which are working to utilize competency-based learning as part of their effort to create a new type of secondary school transcript.
See also
References
Further reading
Educational practices
Human services

Human services is an interdisciplinary field of study with the objective of meeting human needs through an applied knowledge base, focusing on prevention as well as remediation of problems, and maintaining a commitment to improving the overall quality of life of service populations. The field involves the study of social technologies (practice methods, models, and theories), service technologies (programs, organizations, and systems), and scientific innovations designed to ameliorate problems and enhance the quality of life of individuals, families and communities, and to improve the delivery of services with better coordination, accessibility and accountability. The mission of human services is to promote a practice that involves working simultaneously at all levels of society (a whole-person approach) to promote the autonomy of individuals or groups, make informal or formal human services systems more efficient and effective, and advocate for positive social change within society.
Human services practitioners strive to advance the autonomy of service users through civic engagement, education, health promotion and social change at all levels of society. Practitioners also advocate to ensure that human services systems remain accessible, integrated, efficient and effective.
Human services academic programs are widely available at colleges and universities, which award degrees at the associate, baccalaureate, and graduate levels. Human services programs exist in countries around the world.
History
United States
Human services has its roots in charitable activities of religious and civic organizations that date back to the Colonial period. However, the academic discipline of human services did not start until the 1960s. At that time, a group of college academics started the new human services movement and began to promote the adoption of a new ideology about human service delivery and professionalism among traditional helping disciplines. The movement's major goal was to make service delivery more efficient, effective, and humane. The other goals dealt with the reeducation of traditional helping professionals to have a greater appreciation of the individual as a whole person (humanistic psychology) and to be accountable to the communities they serve (postmodernism). Furthermore, professionals would learn to take responsibility at all levels of government, use systems approaches to consider human problems, and be involved in progressive social change.
Traditional academic programs such as education, nursing, social work, law and medicine were resistant to the new human services movement's ideology because it appeared to challenge their professional status. Changing the traditional concept of professionalism involved rethinking consumer control and the distribution of power. The new movement also called on human service professionals to work for social change. It was proposed that reducing monopolistic control on professionals could result in the democratization of knowledge, leading professionals to counteract dominant establishments and advocate on behalf of their clients and communities. The movement also hoped that human service delivery systems would become integrated, comprehensive, and more accessible, which would make them more humane for service users. Ultimately, the resistance from traditional helping professions served as the impetus for a group of educators in higher education to start the new academic discipline of human services.
Some maintain that the human services discipline has a concrete identity as a profession that supplements and complements other traditional professions. Yet other professionals and scholars have not agreed upon an authoritative definition for human services.
Academic programs
United States
Development
Chenault and Burnford argued that human services programs must inform and train students at the graduate or postgraduate level if human services hoped to be considered a professional discipline. A progressive graduate human services program was established by Audrey Cohen (1931–1996), who was considered an innovative educator for her time. The Audrey Cohen College of Human Services, now called the Metropolitan College of New York, offered one of the first graduate programs in 1974. In the same time period, Springfield College in Massachusetts became a major force in preserving human services as an academic discipline. Currently, Springfield College is one of the oldest and largest human services program in the United States.
Manpower studies in the 1960s and 70s had shown that there would be a shortage of helping professionals in an array of service delivery areas. In turn, some educators proposed that the training of nonprofessionals (e.g., mental health technicians) could bridge this looming personnel shortage. One of the earliest educational initiatives to develop undergraduate curricula was undertaken by the Southern Regional Education Board (SREB), which was funded by the National Institute on Health. Professionals of the SREB Undergraduate Social Welfare Manpower Project helped colleges develop new social welfare programs, which later became known as human services. Some believed community college human services programs were the most expedient way to train paraprofessionals for direct service jobs in areas such as mental health. Currently, a large percentage of human services programs are run at the community college level.
The development of community college human services programs was supported with government funding that was earmarked for the federal new careers initiatives. In turn, the federally funded New Careers Program was created to produce a nonprofessional career track for economically disadvantaged, underemployed, and unemployed adults as a strategy to eradicate poverty within society and to end a critical shortage of health-care personnel. Graduates from these programs successfully acquired employment as paraprofessionals, but there were limitations to their upward mobility within social service agencies because they lacked a graduate or professional degree.
Current programs
Currently, there are academic programs in human services at the associate, baccalaureate, and graduate levels. There are approximately 600 human services programs throughout the United States. An online directory of human services programs lists many (but not all) of the programs state by state, in conjunction with their accreditation status from the Council for Standards in Human Services Education (CSHSE).
The CSHSE offers accreditation for human services programs in higher education. The accreditation process is voluntary and labor-intensive; it is designed to assure the quality, consistency, and relevance of human service education through research-based standards and a peer-review process. According to the CSHSE's webpage there are only 43 accredited human services programs in the United States.
Human services curricula are based on an interdisciplinary knowledge foundation that allows students to consider practical solutions from multiple disciplinary perspectives. Across the curriculum, human services students are often taught to view human problems from a socioecological perspective (developed by Urie Bronfenbrenner) that involves viewing human strengths and problems as interconnected with a family unit, community, and society. This perspective is considered a "whole-person perspective". Overall, undergraduate programs prepare students to be human services generalists, master's programs prepare students to be human services administrators, and doctoral programs prepare students to be researcher-analysts and college-level educators. Research in this field focuses on an array of topics that deal with direct service issues, case management, organizational change, management of human service organizations, advocacy, community organizing, community development, social welfare policy, service integration, multiculturalism, integration of technology, poverty issues, social justice, development, and social change strategies.
Certification and continuing education
United States
The Center for Credentialing & Education (CCE) conceptualized the Human Services-Board Certified Practitioner (HS-BCP) credential with the assistance of the National Organization for Human Services (NOHS) and the Council for Standards in Human Service Education (CSHSE). The credential was created for human services practitioners seeking to advance their careers by acquiring independent verification of their practical knowledge and educational background.
Graduates from human services programs can obtain a Human Services-Board Certified Practitioner (HS-BCP) credential offered by the Center for Credentialing & Education (CCE). The HS-BCP certification ensures that human services practitioners offer quality services, are competent service providers, are committed to high standards, and adhere to the NOHS Ethical Standards of Human Service Professionals, as well as helping to solidify the professional identity of human services practitioners. HS-BCP applicants must meet post-graduation experience requirements to be eligible to take the examination, although graduates of a CSHSE-accredited degree program may sit for the HS-BCP exam without verifying their human services work experience. For candidates not from a CSHSE-accredited program, the post-degree experience requirements are: an associate degree requires three years, including a minimum of 4,500 hours; a bachelor's degree requires two years, including a minimum of 3,000 hours; a master's or doctorate requires one year, including a minimum of 1,500 hours.
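The eligibility thresholds quoted above reduce to a small lookup, sketched below in Python. This is only an illustration of the rules as stated in this section, with invented function and table names; the CCE's actual application process includes requirements this sketch does not capture.

```python
# Post-degree experience thresholds quoted above:
# degree level -> (years required, minimum hours).
REQUIREMENTS = {
    "associate": (3, 4500),
    "bachelor": (2, 3000),
    "graduate": (1, 1500),  # master's or doctorate
}

def hs_bcp_eligible(degree, years, hours, cshse_accredited=False):
    """Rough screen for HS-BCP exam eligibility.

    Graduates of CSHSE-accredited programs may sit the exam without
    verifying work experience; all other candidates must meet both the
    years and the hours thresholds for their degree level.
    """
    if cshse_accredited:
        return True
    required_years, required_hours = REQUIREMENTS[degree]
    return years >= required_years and hours >= required_hours

print(hs_bcp_eligible("bachelor", years=2, hours=3100))   # True
print(hs_bcp_eligible("associate", years=3, hours=4000))  # False: hours fall short
print(hs_bcp_eligible("graduate", years=0, hours=0, cshse_accredited=True))  # True
```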
The HS-BCP exam is designed to verify a candidate's human services knowledge. The exam was created as a collaborative effort of human services subject-matter experts and normed on a population of professionals in the field. The HS-BCP exam covers the following areas:
Assessment, treatment planning, and outcome evaluation
Theoretical orientation/interventions
Case management, professional practice, and ethics
Administration, program development/evaluation, and supervision
Tools and methodology
Numerous tools and methods are utilized in human services. For example, qualitative and quantitative surveys are administered to define community problems that need addressing. These surveys can narrow down what service is needed, who would receive it, for how long, and where the problem is concentrated. Additional necessary skills include strong communication and professional coordination, since networking is crucial for obtaining and transporting resources to areas of need; lacking these skills can have dangerous consequences, as a community's needs may not be adequately met. Furthermore, research is a key component of the successful conduct of human services. Both theoretical and empirical research are required of anyone pursuing a career in human services, because being uninformed can leave communities in confusion and disarray, thus perpetuating the problem that was supposed to be resolved. In relation to social work, a professional must be unbiased and patient, because they will be working closely with a vast and diverse population who are often in extremely dire situations. Allowing one's personal beliefs to bleed into one's human services practice could negatively impact the quality of services or limit the scope of potential outreach.
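As a toy illustration of how quantitative survey responses can narrow down what service is needed and where the problem is concentrated, the Python sketch below tallies invented needs-assessment answers; the question, field names, and data are all hypothetical.

```python
from collections import Counter

# Invented toy responses to a needs-assessment question:
# "Which service does your household most need?"
responses = [
    {"need": "housing", "area": "north"},
    {"need": "child care", "area": "north"},
    {"need": "housing", "area": "north"},
    {"need": "mental health", "area": "south"},
    {"need": "housing", "area": "south"},
]

# Rank needs overall, then show where each need is concentrated.
overall = Counter(r["need"] for r in responses)
by_area = Counter((r["need"], r["area"]) for r in responses)

for need, count in overall.most_common():
    concentration = {area: n for (nd, area), n in by_area.items() if nd == need}
    print(f"{need}: {count} responses, by area {concentration}")
# housing: 3 responses, by area {'north': 2, 'south': 1}
# child care: 1 responses, by area {'north': 1}
# mental health: 1 responses, by area {'south': 1}
```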
Employment outlook
United States
Currently, the three major employment roles played by human services graduates include providing direct service, performing administrative work, and working in the community. According to the Occupational Outlook Handbook, published by the US Department of Labor, the employment of human service assistants was anticipated to grow by 34% through 2016, faster than the average for all occupations. There are several different occupations for individuals with post-secondary degrees. Specialization is crucial when applying for a human services career, because many different job occupations and skills fall under the broad scope of human services, especially in jobs related to social work. This is because different types of people require different types of aid. For example, a child needs different attention than an adult and would visit a professional trained to work with younger people, an alcoholic or addict would specifically need a professional rehabilitation counselor, and a victim of a natural disaster would need a crisis support worker for immediate assistance. Other examples of human services jobs include, but are not limited to, criminology, community service, housing, health, therapy, and sociology.
Professional organizations
North America
There are several different professional human services organizations for professionals, educators, and students to join across North America.
United States
The National Organization for Human Services (NOHS) is a professional organization open to educators, professionals, and students interested in current issues in the field of human services. NOHS sponsors an annual conference in different parts of the United States. In addition, there are four independent human services regional organizations: (a) the Mid-Atlantic Consortium for Human Services, (b) the Midwest Organization for Human Services, (c) the New England Organization for Human Service, and (d) the Northwest Human Services Association. All the regional organizations are also open to educators, professionals, and students, and each holds an annual conference at different locations throughout its region, such as universities or other institutions.
Human services special interest groups also exist within the American Society for Public Administration (ASPA) and the American Educational Research Association (AERA). The ASPA subsection is named the Section on Health and Human Services Administration and its purpose is to foster the development of knowledge, understanding and practice in the fields of health and human services administration and to foster professional growth and communication among academics and practitioners in these fields. Fields of health and human services administration share a common and unique focus on improving the quality of life through client-centered policies and service transactions.
The AERA special interest group is named the Education, Health and Human Service Linkages. Its purpose is to create a community of researchers and practitioners interested in developing knowledge about comprehensive school health, school linked services, and initiatives that support children and their families. This subgroup also focuses on interpersonal collaboration, integration of services, and interdisciplinary approaches. The group's interests encompass interrelated policy, practice, and research that challenge efforts to create viable linkages among these three distinct areas.
The American Public Human Services Association (APHSA) is a nonprofit organization that pursues distinction in health and human services by working with policymakers, supporting state and local agencies, and working with partners to promote innovative, integrative and efficient solutions in health and human services policy and practice. APHSA has individual and student memberships.
Canada
The Canadian Institute for Human Services is an advocacy, education and action-research organization for the advancement of health equity, progressive education and social innovation. The institute collaborates with researchers, field practitioners, community organizations, socially conscious companies—along with various levels of government and educational institutions—to ensure the Canadian health and human services sector remains accountable to the greater good of Canadian civil society rather than short-term professional, business or economic gains.
See also
References
Further reading
Brager, G., & Holloway, S. (1978). Changing human service organizations: Politics and practice. New York, NY: The Free Press.
Bronfenbrenner, U. (2005). Making human beings human: Biological perspectives on human development. Thousand Oaks, CA: Sage Publications.
Cimbala, P.A., & Miller, R.M. (1999). The Freedmen's Bureau and Reconstruction. New York, NY: Fordham University Press.
Colman, P. (2007). Breaking the chains: The crusade of Dorothea Lynde Dix. New York, NY: ASJA Press.
De Tocqueville, A. (2006). Democracy in America (G. Lawrence, Trans.). New York, NY: Harper Perennial Modern Classic (Original work published 1832).
Friedman, L. J. (2003). Giving and caring in early America 1601-1861. In L.J. Friedman, & M.D. McGarvie, Charity, philanthropy, and civility in American history (pp. 23–48). Cambridge, UK: Cambridge University Press.
Hasenfeld, Y. (1992). The nature of human service organizations. In Y. Hasenfeld, Human Services as Complex Organizations (pp. 3–23). Newbury Park, CA: Sage Publications.
Marshall, J. (2011). The life of George Washington. Fresno, CA: Edwards Publishing House.
Nellis, E.G., & Decker, A.D. (2001). The eighteenth-century records of the Boston overseers of the poor. Charlottesville, VA: University of Virginia Press.
Neukrug, E. (2016). Theory, practice, and trends in human services: An introduction (6th ed.). Belmont, CA: Cengage.
Slack, P. (1995). The English Poor Law, 1531-1782. Cambridge, UK: Cambridge University Press.
Trattner, W.I. (1999). From Poor Law to welfare state: A history of social welfare in America. New York, NY: The Free Press.
Academic disciplines
Community building
Human sciences | 0.76555 | 0.994612 | 0.761425 |
Social order | The term social order can be used in two senses: In the first sense, it refers to a particular system of social structures and institutions. Examples are the ancient, the feudal, and the capitalist social order. In the second sense, social order is contrasted to social chaos or disorder and refers to a stable state of society in which the existing social structure is accepted and maintained by its members. The problem of order or Hobbesian problem, which is central to much of sociology, political science and political philosophy, is the question of how and why it is that social orders exist at all.
Sociology
Thomas Hobbes is recognized as the first to clearly formulate the problem, to answer which he conceived the notion of a social contract.
Social theorists (such as Karl Marx, Émile Durkheim, Talcott Parsons, and Jürgen Habermas) have proposed different explanations for what a social order consists of, and what its real basis is. For Marx, it is the relations of production or economic structure which is the basis of social order. For Durkheim, it is a set of shared social norms. For Parsons, it is a set of social institutions regulating the pattern of action-orientation, which again are based on a frame of cultural values. For Habermas, it is all of these, as well as communicative action.
Principle of extensiveness
Another key factor concerning social order is the principle of extensiveness. This states that the more norms a society has, and the more important those norms are, the more strongly they tie and hold the group together as a whole.
A good example of this is smaller religious groups based in the U.S., such as the Amish. Many Amish live together in communities, and because they share the same religion and values, they find it easier to uphold their religion and views, since their way of life is the norm for their community.
Groups and networks
In every society, people belong to groups, such as businesses, families, churches, athletic groups, or neighborhoods. The structure inside of these groups mirrors that of the whole society. There are networks and ties between groups, as well as inside of each of the groups, which create social order.
Status groups
"Status groups" can be based on a person's characteristics such as race, ethnicity, sexual orientation, religion, caste, region, occupation, physical attractiveness, gender, education, age, etc. They are defined as "a subculture having a rather specific rank (or status) within the stratification system. That is, societies tend to include a hierarchy of status groups, some enjoying high ranking and some low." One example of this hierarchy is the prestige of a university professor compared to that of a garbage man.
A certain lifestyle usually distinguishes the members of different status groups. For example, around the holidays a Jewish family may celebrate Hanukkah while a Christian family may celebrate Christmas. Other cultural differences such as language and cultural rituals identify members of different status groups.
Smaller groups exist inside of one status group. For instance, one can belong to a status group based on one's race and to a social class based on financial ranking. This may cause strife for the individual when they feel they must choose between siding with their status group or with their social class. For example, a wealthy African American man may feel he has to take a side on an issue that divides poor African Americans and wealthy white Americans, and find his class and his status group opposed.
Values and norms
Values can be defined as "internal criteria for evaluation". Values are split into two categories: individual values, which pertain to things we think have worth, and social values. Social values are our desires modified according to ethical principles or according to the groups we associate with: friends, family, or co-workers.
Norms tell us what people ought to do in a given situation. Unlike values, norms are enforced externally – or outside of oneself. A society as a whole determines norms, and they can be passed down from generation to generation.
Power and authority
An exception to the idea of values and norms as keepers of social order is deviant behavior. Not everyone in a society abides by a set of personal values or the group's norms all the time. For this reason, it is generally deemed necessary for a society to have authority. The opposing view holds that the need for authority stems from social inequality.
In a class society, those who hold positions of power and authority are among the upper class. Norms differ for each class because the members of each class were raised differently and hold different sets of values. Tension can form, therefore, between the upper class and lower class when laws and rules are put in place that do not conform to the values of both classes.
Spontaneous order
Order does not necessarily need to be maintained by government. Individuals pursuing self-interest can produce predictable systems. These systems, arising from the actions of many people rather than the plan of a single person, may actually be preferable to those planned by one person. This means that predictability may be achievable without a central government's control. Such stable expectations do not, however, necessarily lead individuals to behave in ways that benefit group welfare. With this in mind, Thomas Schelling studied neighborhood racial segregation. His findings suggest that interaction can produce predictability, but that it does not always increase social order. In his research, he found that "when all individuals pursue their own preferences, the outcome is segregation rather than integration," as stated in "Theories of Social Order", edited by Michael Hechter and Christine Horne.
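Schelling's result is easy to reproduce in a toy agent-based simulation. The sketch below is a minimal illustration rather than Schelling's original formulation; the grid size, tolerance threshold, and movement rule are assumed parameters chosen for brevity.

```python
import random

SIZE = 20        # grid side length (illustrative)
EMPTY = 0.2      # fraction of cells left empty
TOLERANCE = 0.4  # minimum same-group share of neighbors an agent accepts
STEPS = 100

def neighbors(grid, r, c):
    """Occupants of the eight surrounding cells (wrapping at the edges)."""
    out = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                occupant = grid[(r + dr) % SIZE][(c + dc) % SIZE]
                if occupant is not None:
                    out.append(occupant)
    return out

def unhappy(grid, r, c):
    """An agent is unhappy if too few of its neighbors share its group."""
    nbrs = neighbors(grid, r, c)
    return bool(nbrs) and nbrs.count(grid[r][c]) / len(nbrs) < TOLERANCE

def mean_same_group_share(grid):
    """Average share of same-group neighbors: a crude segregation index."""
    shares = [neighbors(grid, r, c).count(grid[r][c]) / len(neighbors(grid, r, c))
              for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and neighbors(grid, r, c)]
    return sum(shares) / len(shares)

# Start from a random mix of two groups, 'A' and 'B', plus empty cells.
grid = [[None if random.random() < EMPTY else random.choice('AB')
         for _ in range(SIZE)] for _ in range(SIZE)]
print(f"before: {mean_same_group_share(grid):.2f}")

for _ in range(STEPS):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    for r, c in movers:
        # Each unhappy agent relocates to a randomly chosen empty cell.
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], None
        empties.append((r, c))

print(f"after:  {mean_same_group_share(grid):.2f}")
```

Even with the mildly tolerant agents assumed here, content with as few as 40% same-group neighbors, the average same-group share typically climbs well above the roughly 50% of the initial random mix, showing how individually predictable behavior can aggregate into segregation rather than integration.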
Social honor
Social honor can also be referred to as social status. It is the distribution of prestige, or "the approval, respect, admiration, or deference a person or group is able to command by virtue of his or its imputed qualities or performances". Most often, people associate social honor with the place a person occupies within material systems of wealth and power. Since most of society finds wealth and power desirable, people respect or envy those who have more than they do. When social honor is referred to as social status, it deals with the rank of a person within the stratification system. Status can be achieved, when a person's position is gained on the basis of merit, in other words by achievement and hard work; or it can be ascribed, when a position is assigned to individuals or groups without regard for merit because of certain traits beyond their control, such as race, sex, or parental social standing. An example of ascribed status is Kate Middleton, who married a prince. An example of achieved status is Oprah Winfrey, an African American woman from poverty who worked her way to being a billionaire.
Attainment
Two different theories attempt to account for social order. The first holds that "order results from a large number of independent decisions to transfer individual rights and liberties to a coercive state in return for its guarantee of security for persons and their property, as well as its establishment of mechanisms to resolve disputes," as stated in Theories of Social Order by Hechter and Horne. The second locates "the ultimate source of social order as residing not in external controls but in a concordance of specific values and norms that individuals somehow have managed to internalize," also from Theories of Social Order. The two arguments are very different: one holds that order is achieved through outside influence and control, the other that it can only be attained when individuals willingly follow norms and values that they have grown accustomed to and internalized. Weber's insistence on the importance of domination and symbolic systems in social life was retained by Pierre Bourdieu, who developed the idea of social orders, ultimately transforming it into a theory of fields.
See also
Anti-social behaviour
Antinomianism
Conformity
Norm (sociology)
Organic crisis
Social hierarchy
References
Further reading
Hobbes, T. Leviathan or The Matter, Forme and Power of a Common Wealth Ecclesiasticall and Civil.
Sociological terminology
Structural functionalism | 0.767437 | 0.992061 | 0.761345 |
MDA framework | In game design the Mechanics-Dynamics-Aesthetics (MDA) framework is a tool used to analyze games. It formalizes the properties of games by breaking them down into three components: Mechanics, Dynamics and Aesthetics. These three words have been used informally for many years to describe various aspects of games, but the MDA framework provides precise definitions for these terms and seeks to explain how they relate to each other and influence the player's experience.
Overview
Mechanics are the base components of the game — its rules, every basic action the player can take in the game, the algorithms and data structures in the game engine, etc.
Dynamics are the run-time behavior of the mechanics acting on player input and "cooperating" with other mechanics.
Aesthetics are the emotional responses evoked in the player.
There are many types of aesthetics, including but not limited to the following eight stated by Hunicke, LeBlanc and Zubek:
Sensation (Game as sense-pleasure): Player enjoys memorable audio-visual effects.
Fantasy (Game as make-believe): Imaginary world.
Narrative (Game as drama): A story that drives the player to keep coming back.
Challenge (Game as obstacle course): Urge to master something. Boosts a game's replayability.
Fellowship (Game as social framework): A community of which the player is an active part. Almost exclusively found in multiplayer games.
Discovery (Game as uncharted territory): Urge to explore game world.
Expression (Game as self-discovery): One's own creativity. For example, creating a playable character resembling the player's own appearance.
Submission (Game as pastime): Connection to the game, as a whole, despite constraints.
The paper also mentions a ninth kind of fun, competition. The paper seeks to better specify terms such as 'gameplay' and 'fun' and to extend the vocabulary of game studies, suggesting a non-exhaustive taxonomy of eight different types of play. The framework uses these definitions to demonstrate the incentivising and disincentivising properties of different dynamics on the eight subcategories of game use.
From the perspective of the designer, the mechanics generate dynamics, which in turn generate aesthetics. This relationship poses a challenge for the game designer: they can directly shape only the mechanics, and only through the mechanics can meaningful dynamics and aesthetics be produced for the player.
The perspective of the player is the other way around: the player experiences the game through the aesthetics, which are provided by the dynamics, which themselves emerge from the mechanics.
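As a concrete illustration of this designer-side tuning, consider a hypothetical press-your-luck dice game (this example is not from the original paper). The designer sets only the mechanics: the die and the rule for going bust. How long risk/reward streaks actually last is a dynamic that emerges in play, and the tension the player feels about pushing a streak further is the aesthetic. A minimal simulation sketch in Python:

```python
import random

# Hypothetical press-your-luck example (not from the MDA paper).
# Mechanics: the die size and the bust rule.
# Dynamic measured here: how long a streak lasts for a player who
# always keeps rolling.
# Aesthetic: the tension or challenge the player feels about that
# dynamic, which the designer can influence only indirectly.

def average_streak(sides, bust_faces, trials=100_000):
    """Average number of safe rolls before busting."""
    total = 0
    for _ in range(trials):
        while random.randint(1, sides) > bust_faces:
            total += 1
    return total / trials

# Tuning one mechanic and watching the dynamic respond:
for sides, bust_faces in [(6, 1), (6, 2), (10, 1)]:
    avg = average_streak(sides, bust_faces)
    print(f"d{sides}, bust on rolls <= {bust_faces}: average streak {avg:.1f}")
```

Changing the bust rule from one face in six to two shortens the average streak from about five rolls to two: a large shift in the dynamic from a small change in the mechanic. Whether the resulting pacing reads as tense or as punishing is the aesthetic question the designer must then judge.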
Criticism
Despite its popularity, the original MDA framework has been criticized for several potential weaknesses. The eight kinds of fun comprise a rather arbitrary list of emotional targets that lacks a theoretical foundation and offers no account of how further types of emotional response might be explored. The framework has also been challenged for neglecting many design aspects of games while focusing too heavily on game mechanics, and is therefore not suitable for all types of games, particularly gamified content or other kinds of experience-oriented design.
References
External links
Gamasutra page: http://www.gamasutra.com/blogs/TuckerAbbott/20101212/88611/MDA_Framework_Unconnected_Connectivity.php
6-11 Framework: https://www.academia.edu/1571687/THE_6-11_FRAMEWORK_A_NEW_METHODOLOGY_FOR_GAME_ANALYSIS_AND_DESIGN
Video game design
Video game development | 0.768791 | 0.99029 | 0.761326 |
Finishing school | A finishing school focuses on teaching young women social graces and upper-class cultural rites as a preparation for entry into society. The name reflects the fact that it follows ordinary school and is intended to complete a young woman's education by providing classes primarily on deportment, etiquette, and other non-academic subjects. The school may offer an intensive course, or a one-year programme. In the United States, a finishing school is sometimes called a charm school.
Graeme Donald claims that the educational ladies' salons of the late 19th century led to the formal finishing institutions common in Switzerland around that time. At the schools' peak, thousands of wealthy young women were sent to one of the dozens of finishing schools available. The primary goals of such institutions were to teach students the skills necessary to attract a good husband, and to become interesting socialites and wives.
The 1960s marked the decline of finishing schools worldwide. This decline can be attributed to shifting conceptions of women's role in society, to competition from tertiary education, to succession issues within the typically family-run schools and, sometimes, to commercial pressures driven by the high value of the properties that the schools occupied. The 1990s saw a revival of the finishing school, although the business model was radically altered.
By country
Switzerland
In the early 20th century, Switzerland was known for its private finishing schools. Most operated in the French-speaking cantons near Lake Geneva. The country was favoured by parents and guardians because of its reputation as a healthful environment, its multi-lingual and cosmopolitan aura, and the country's political stability.
Notable examples
The finishing schools that made Switzerland renowned for such institutions included:
Brillantmont (founded in 1882, now an international secondary-school that offers a 'grade 14' or graduate year of cultural studies) and Château Mont-Choisi (founded in 1885, closed in 1995 or 1996). Both were in Lausanne. The Maharani of Jaipur (1919-2009) studied at Brillantmont. In her memoir, she portrayed the time as a happy one, in which she wrote letters to her husband-to-be and pursued skiing and other sports. Actress Gene Tierney (1920-1991) also attended Brillantmont, speaking only French and holidaying with fellow-students in Norway and England.
Another school was attended by Carla Bruni-Sarkozy, as well as by Princess Elena of Romania, Monique Lhuillier, actress Kitty Carlisle, Saudi scholar Mai Yamani and New York socialite Fabiola Beracasa-Beckman. It was one of the first Swiss finishing schools in the 19th century and in its early years a pioneer in secondary education. It was owned by an Italian family for five years prior to its closure (due to financial reasons) after over 100 years of educating women. Like many of its peers it adopted a serious secondary-education programme in the early 20th century.
Institut Alpin Videmanette in Rougemont was attended by Diana, Princess of Wales (1961-1997), Princess Irene of Greece and Denmark, Tiggy Legge-Bourke and Tamara Mellon. Lady Diana was sent to Alpin Videmanette by her father after failing all her O-Levels. She had met the Prince of Wales that year.
Mon Fertile in Tolochenaz, educated Queen Camilla and Ingrid Detter de Lupis Frankopan.
Institut Le Mesnil was attended by Queen Anne-Marie of Greece after completing her high-school education at the nearby Le Chatelard School, also in Montreux. Le Mesnil, owned by the Navarro family, closed in 2004. Le Chatelard today offers education in the American model of junior-high and high-school up to the age of 17. The organization today offers savoir vivre and culinary courses along the lines of the traditional finishing schools, but these supplement rather than replace academic subjects.
Le Manoir, in Lausanne, educated British secret agent Vera Atkins (1908–2000) and a sister of the king of the Albanians. It had a private beach and students were taken skiing in St Moritz.
Institut Villa Pierrefeu in Glion, Vaud, founded in 1954, is the last remaining traditional Swiss finishing school.
Great Britain
In London there were a number of schools in the 20th century, including the Cygnet's House, the Monkey Club, St James and Lucie Clayton. The latter two merged in 2005 to become St James and Lucie Clayton College, and were joined by a third, Queens (a secretarial college), to become the current Quest Professional. The combined curriculum stopped offering etiquette or protocol training, which was instead absorbed by a former Lucie Clayton tutor, who started The English Manner in 2001, when Lucie Clayton wound up. Quest Professional is in London's Victoria district, offers business administration courses for students aged 16–25, and is coeducational.
Eggleston Hall was located in County Durham and taught young ladies aged 16–20 from the 1960s until the late 1980s.
Evendine Court in Malvern began as a small school in the late 19th century teaching young ladies the duties of their families' household staff, by requiring them to complete domestic work themselves. Courses typically lasted six weeks. By 1900, the school had become popular. It extended to several buildings and included a working dairy farm to teach practical farming. During the Second World War it adopted more traditional finishing school subjects for young women unable to travel to Europe. Pupil numbers remained high until the mid-1990s, with a broader curriculum covering cordon bleu cookery, self presentation, and secretarial skills. It closed in 1998.
Winkfield Place in Ascot specialised in culinary expertise and moved to a new location in Surrey around 1990 when it joined with Moor Park Finishing School before Moor Park closed in 1998/99. Winkfield Place was founded by women's educator Constance Spry as a flower arranging and domestic science school and had an international reputation. It taught girls across three terms of an academic year with the possibility of studying Le Cordon Bleu courses with Rosemary Hume in a fourth term.
About a decade after these schools had closed, mostly by the end of the 20th century, public relations and image consultancy firms started to appear in London, offering largely one- or two-day finishing and social skills courses at commercial fees that were proportionately far higher than those charged by the schools.
The old finishing schools were stand-alone organizations that lasted 15–50 years and were often family run. Curricula varied between schools based on the proprietor's philosophy, much like the British private school model of the 18th and 19th centuries. Some schools offered O-level and A-level courses or recognised arts and languages certificates, and sometimes allowed pupils to retake a course they had not passed at secondary school. They often taught languages and commercially or domestically applicable skills, such as cooking, secretarial and later business studies, with the aim of broadening students' horizons beyond formal schooling.
United States
Through much of their history, American finishing schools emphasised social graces and de-emphasised scholarship: society encouraged a polished young lady to hide her intellectual prowess for fear of frightening away suitors. For instance, Miss Porter's School in 1843 advertised itself as Miss Porter's Finishing School for Young Ladies—even though its founder was a noted scholar offering a rigorous curriculum that educated the illustrious classicist Edith Hamilton.
Today, with a new cultural climate and a different attitude to the role of women, the situation has reversed: Miss Porter's School downplays its origins as a finishing school, and emphasises the rigour of its academics. Likewise, Finch College on Manhattan's Upper East Side was "one of the most famed of U.S. girls' finishing schools", but its last president chose to describe it as a liberal arts college, offering academics as rigorous as Barnard or Bryn Mawr. It closed in 1976.
The term finishing school is occasionally used, or misused, in American parlance to refer to certain small women's colleges, primarily on the East Coast, that were once known for preparing their female students for marriage. Since the 1960s, many of these schools have closed as a result of financial difficulties. These stemmed from changing societal norms, which made it easier for women to pursue academic and professional paths.
In literature
The Finishing School, a 2004 novel by Scottish author Muriel Spark, concerns 'College Sunrise', a present-day finishing school in Ouchy on the banks of Lake Geneva near Lausanne in Switzerland. Unlike the traditional finishing schools, the one in this novel is mixed-sex.
References
School types
Women and education | 0.764926 | 0.995283 | 0.761318 |
Cramming (education) | In education, cramming is the practice of working intensively to absorb large volumes of information in short amounts of time. It is also known as massed learning. It is often done by students in preparation for upcoming exams, especially just before them. Usually the student's priority is to obtain shallow recall suited to a superficial examination protocol, rather than to internalize the deep structure of the subject matter. Cramming is often discouraged by educators because the hurried coverage of material tends to result in poor long-term retention of material, a phenomenon often referred to as the spacing effect. Despite this, educators nevertheless widely persist in the use of superficial examination protocols, because these questions are easier to compose, quicker (and therefore cheaper for the institution) to grade, and objective on their own terms. When cramming, one attempts to focus only on studies and to forgo unnecessary actions or habits.
In contrast with cramming, active learning and critical thinking are two methods which emphasize the retention of material through the use of class discussions, study groups and individual thinking. Each has been cited as a more effective means of learning and retaining information as compared to cramming and memorization.
Prevalence
In Commonwealth countries, cramming usually occurs during the revision week (week before exams), also known as "swotvac" or "stuvak".
As a study technique
H.E. Gorst stated in his book, The Curse of Education, "as long as education is synonymous with cramming on an organized plan, it will continue to produce mediocrity."
Generally considered an undesirable study technique, cramming became more and more common among students both at the secondary and university levels. Pressure to perform well in the classroom and engage in extracurricular activities in addition to other responsibilities often results in the cramming method of studying. Cramming is a widely used study skill performed in preparation for an examination or other performance-based assessment.
Most common among high school and college-aged students, cramming is often used as a means of memorizing large amounts of information in a short amount of time. Students are often forced to cram after improper time utilization or in efforts to understand information shortly before being tested. Improper time management is usually the cause for last-minute cramming sessions, and many study techniques have been developed to help students succeed instead of cramming.
School performance
Teaching students to avoid last-minute cramming is a large area of concern for education professionals, and a source of profit for educational corporations and businesses. Learning and teaching study techniques that enhance retention, as opposed to learning for a single examination, is one of the core issues facing college and university academic advisors, and it also adds to the stress of academic success for students. Ideally, proper study skills need to be introduced and practiced as early as possible in order for students to effectively learn positive study mechanisms.
According to William G. Sommer, students in a university system often adapt to the time-constraints that are placed upon them in college, and often use cramming to perform well on tests. In his article, Procrastination and Cramming: How Adept Students Ace the System, he states "Many students outwardly adapt to this system, however, engage in an intense and private ritual that comprises five aspects: calculated procrastination, preparatory anxiety, climactic cramming, nick-of-time deadline-making, and a secret, if often uncelebrated, victory. These adept students often find it difficult to admit others into their efficient program of academic survival."
Research
Hermann Ebbinghaus is considered a pioneer in research on cramming. He was the first person to compare distributed learning with cramming.
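Ebbinghaus's comparison can be illustrated with a toy forgetting-curve model. Everything below is an assumption for illustration, not Ebbinghaus's actual data: recall is modeled as exponential decay, and, in the spirit of later "desirable difficulty" accounts, a harder retrieval (lower recall at review time) is assumed to strengthen the memory more.

```python
import math

# Toy model (assumed parameters and functional form, not Ebbinghaus's
# actual data). Recall decays as R = exp(-t / S); each review
# strengthens the stability S, and a harder retrieval (lower recall at
# review time) is assumed to yield a larger gain.

BOOST = 4.0  # assumed maximum stability gain per review

def recall_after(review_days, test_day):
    s, last = 1.0, 0.0  # stability (days) and time of last exposure;
                        # the material is first learned on day 0
    for day in review_days:
        r = math.exp(-(day - last) / s)  # recall at review time
        s *= 1 + BOOST * (1 - r)         # harder retrieval, bigger gain
        last = day
    return math.exp(-(test_day - last) / s)

plans = {
    "massed": [13.0, 13.2, 13.4],  # three reviews crammed before a day-14 exam
    "spaced": [1.0, 5.0, 10.0],    # the same three reviews spread out
}

for name, days in plans.items():
    exam, later = recall_after(days, 14.0), recall_after(days, 60.0)
    print(f"{name}: recall on exam day {exam:.2f}, on day 60 {later:.3f}")
```

Under these assumptions the crammed plan does about as well on exam day (roughly 0.91 versus 0.87) but its recall collapses by day 60 (about 0.001 versus 0.19), matching the pattern of shallow exam-day recall and poor long-term retention described above.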
See also
Active learning
Cram (game show)
Cram school
Critical thinking
Rote learning
Spaced repetition
Study skills
References
External links
Cramming on wikiHow
Cramming Techniques on TestTakingTips.com
The dangers of cramming for exams, Penn State University
Cram My Brain provides free cramming software and offers foreign language support (requires Adobe Flash)
Academic slang
Learning methods
Student culture | 0.769147 | 0.989777 | 0.761283 |
Heterodox economics | Heterodox economics refers to attempts at treating the subject of economics that reject the standard tools and methodologies of mainstream economics, which constitute the scientific method as applied to the field of economics. These tools include include:
An emphasis on making deductively valid arguments by explicitly formalizing assumptions into mathematical models;
The application of decision and game theory or cognitive science (by behavioral economists) to predict human behavior; and
The practice of empirically testing economic theories using either experimental or econometric data.
Groups typically classed as heterodox include the Austrian, ecological, Marxist-historical, post-autistic, and modern monetary approaches.
Heterodox economics tends to be identified, both by heterodox and mainstream economists, as a branch of the humanities, rather than the behavioral sciences, with many heterodox economists rejecting the possibility of applying the scientific method to the study of society. Four frames of analysis have been highlighted for their importance to heterodox thought: history, natural systems, uncertainty, and power.
History
In the mid-19th century, such thinkers as Auguste Comte, Thomas Carlyle, John Ruskin and Karl Marx made early critiques of orthodox economics. A number of heterodox schools of economic thought challenged the dominance of neoclassical economics after the neoclassical revolution of the 1870s. In addition to socialist critics of capitalism, heterodox schools in this period included advocates of various forms of mercantilism, such as the American School; dissenters from neoclassical methodology, such as the historical school; and advocates of unorthodox monetary theories, such as social credit.
Physical scientists and biologists were the first individuals to use energy flows to explain social and economic development. Joseph Henry, an American physicist and first secretary of the Smithsonian Institution, remarked that the "fundamental principle of political economy is that the physical labor of man can only be ameliorated by… the transformation of matter from a crude state to an artificial condition...by expending what is called power or energy."
The rise of Keynesian economics, and its absorption into the mainstream, contributed to the decline of interest in these schools, as it appeared to provide a more coherent policy response to unemployment than unorthodox monetary or trade policies.
After 1945, the neoclassical synthesis of Keynesian and neoclassical economics resulted in a clearly defined mainstream position based on a division of the field into microeconomics (generally neoclassical but with a newly developed theory of market failure) and macroeconomics (divided between Keynesian and monetarist views on such issues as the role of monetary policy). Austrians and post-Keynesians who dissented from this synthesis emerged as clearly defined heterodox schools. In addition, the Marxist and institutionalist schools remained active but with limited acceptance or credibility.
Up to 1980 the most notable themes of heterodox economics in its various forms included:
rejection of the atomistic individual conception in favor of a socially embedded individual conception;
emphasis on time as an irreversible historical process;
reasoning in terms of mutual influences between individuals and social structures.
From approximately 1980 mainstream economics has been significantly influenced by a number of new research programs, including behavioral economics, complexity economics, evolutionary economics, experimental economics, and neuroeconomics. One key development has been an epistemic turn away from theory towards an empirically driven approach focused centrally on questions of causal inference. As a consequence, some heterodox economists, such as John B. Davis, proposed that the definition of heterodox economics has to be adapted to this new, more complex reality:
...heterodox economics post-1980 is a complex structure, being composed out of two broadly different kinds of heterodox work, each internally differentiated with a number of research programs having different historical origins and orientations: the traditional left heterodoxy familiar to most and the 'new heterodoxy' resulting from other science imports.
Rejection of neoclassical economics
There is no single "heterodox economic theory"; there are many different "heterodox theories" in existence. What they all share, however, is a rejection of the neoclassical orthodoxy as representing the appropriate tool for understanding the workings of economic and social life. The reasons for this rejection may vary. Some of the elements commonly found in heterodox critiques are listed below.
Criticism of the neoclassical model of individual behavior
One of the most broadly accepted principles of neoclassical economics is the assumption of the "rationality of economic agents". Indeed, for a number of economists, the notion of rational maximizing behavior is taken to be synonymous with economic behavior (Hirshleifer 1984). When some economists' studies do not embrace the rationality assumption, they are seen as placing the analyses outside the boundaries of the Neoclassical economics discipline (Landsberg 1989, 596). Neoclassical economics begins with the a priori assumptions that agents are rational and that they seek to maximize their individual utility (or profits) subject to environmental constraints. These assumptions provide the backbone for rational choice theory.
Many heterodox schools are critical of the homo economicus model of human behavior used in the standard neoclassical model. A typical version of the critique is that of Satya Gabriel:
Neoclassical economic theory is grounded in a particular conception of human psychology, agency or decision-making. It is assumed that all human beings make economic decisions so as to maximize pleasure or utility. Some heterodox theories reject this basic assumption of neoclassical theory, arguing for alternative understandings of how economic decisions are made and/or how human psychology works. It is possible to accept the notion that humans are pleasure seeking machines, yet reject the idea that economic decisions are governed by such pleasure seeking. Human beings may, for example, be unable to make choices consistent with pleasure maximization due to social constraints and/or coercion. Humans may also be unable to correctly assess the choice points that are most likely to lead to maximum pleasure, even if they are unconstrained (except in budgetary terms) in making such choices. And it is also possible that the notion of pleasure seeking is itself a meaningless assumption because it is either impossible to test or too general to refute. Economic theories that reject the basic assumption of economic decisions as the outcome of pleasure maximization are heterodox.
Shiozawa emphasizes that economic agents act in a complex world and that it is therefore impossible for them to attain the point of maximal utility. They instead behave as if they have a repertoire of ready-made rules, one of which they choose according to the situation at hand.
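The contrast can be sketched in code. The following is a hypothetical illustration, not Shiozawa's own model: a textbook agent evaluates every option against a utility function, while a rule-following agent applies the first ready-made rule whose condition matches the situation, with no need to enumerate or rank the full option space.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration (not Shiozawa's own model): a
# utility-maximizing agent versus a rule-following one.

@dataclass
class Rule:
    condition: Callable[[dict], bool]  # does this rule apply here?
    action: str                        # what to do if it does

def maximizing_choice(options, utility):
    """Neoclassical agent: evaluate every option, pick the best."""
    return max(options, key=utility)

def rule_based_choice(rules, situation, default="do nothing"):
    """Rule-following agent: the first matching ready-made rule wins."""
    for rule in rules:
        if rule.condition(situation):
            return rule.action
    return default

# Two illustrative, made-up rules for a household buyer:
rules = [
    Rule(lambda s: s["price"] > 1.1 * s["reference_price"], "postpone purchase"),
    Rule(lambda s: s["stock"] < s["reorder_point"], "reorder usual quantity"),
]

print(maximizing_choice(["tea", "coffee"],
                        utility=lambda o: {"tea": 2, "coffee": 5}[o]))
print(rule_based_choice(rules, {"price": 12, "reference_price": 10,
                                "stock": 5, "reorder_point": 3}))
```

The computational point of the contrast is that the rule-follower's effort does not grow with the size of the option space, which is exactly the kind of boundedness Shiozawa argues a complex world forces on real agents.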
Criticism of the neoclassical model of market equilibrium
In microeconomic theory, cost-minimization by consumers and by firms implies the existence of supply and demand correspondences for which market-clearing equilibrium prices exist, if there are large numbers of consumers and producers. Under convexity assumptions or under some marginal-cost pricing rules, each equilibrium will be Pareto efficient; in large economies, non-convexity also leads to quasi-equilibria that are nearly efficient.
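For concreteness, the textbook object in question can be computed directly for linear supply and demand curves; the coefficients below are arbitrary illustrative values, not drawn from any source.

```python
# Linear demand Qd = a - b*p and supply Qs = c + d*p (illustrative,
# assumed coefficients). The market-clearing price solves Qd = Qs,
# giving p* = (a - c) / (b + d).

a, b = 100.0, 2.0   # demand intercept and slope (assumed)
c, d = 10.0, 1.0    # supply intercept and slope (assumed)

p_star = (a - c) / (b + d)   # equilibrium price
q_star = a - b * p_star      # equilibrium quantity at that price

print(f"p* = {p_star:.2f}, q* = {q_star:.2f}")  # p* = 30.00, q* = 40.00
```

The heterodox objection that follows is not that this arithmetic is wrong, but that real markets are rarely well approximated by such curves or by the assumption that they clear.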
However, the concept of market equilibrium has been criticized by Austrians, post-Keynesians and others, who object to applications of microeconomic theory to real-world markets, when such markets are not usefully approximated by microeconomic models. Heterodox economists assert that micro-economic models rarely capture reality.
Mainstream microeconomics may be defined in terms of optimization and equilibrium, following the approaches of Paul Samuelson and Hal Varian. On the other hand, heterodox economics may be labeled as falling into the nexus of institutions, history, and social structure.
Most recent developments
Over the past two decades, the intellectual agendas of heterodox economists have taken a decidedly pluralist turn. Leading heterodox thinkers have moved beyond the established paradigms of Austrian, Feminist, Institutional-Evolutionary, Marxian, Post Keynesian, Radical, Social, and Sraffian economics—opening up new lines of analysis, criticism, and dialogue among dissenting schools of thought. This cross-fertilization of ideas is creating a new generation of scholarship in which novel combinations of heterodox ideas are being brought to bear on important contemporary and historical problems, such as socially grounded reconstructions of the individual in economic theory; the goals and tools of economic measurement and professional ethics; the complexities of policymaking in today's global political economy; and innovative connections among formerly separate theoretical traditions (Marxian, Austrian, feminist, ecological, Sraffian, institutionalist, and post-Keynesian) (for a review of post-Keynesian economics, see Lavoie (1992); Rochon (1999)).
David Colander, an advocate of complexity economics, argues that the ideas of heterodox economists are now being discussed in the mainstream without mention of the heterodox economists, because the tools to analyze institutions, uncertainty, and other factors have now been developed by the mainstream. He suggests that heterodox economists should embrace rigorous mathematics and attempt to work from within the mainstream, rather than treating it as an enemy.
Some schools of heterodox economic thought have also taken a transdisciplinary approach. Thermoeconomics is based on the claim that human economic processes are governed by the second law of thermodynamics. The posited relationship between economic theory, energy and entropy, has been extended further by systems scientists to explain the role of energy in biological evolution in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits of the various mechanisms for capturing and utilizing available energy to build biomass and do work.
Various student movements have emerged in response to the exclusion of heterodox economics in the curricula of most economics degrees. The International Student Initiative for Pluralist Economics was set up as an umbrella network for various smaller university groups such as Rethinking Economics to promote pluralism in economics, including more heterodox approaches.
Fields of heterodox economic thought
American Institutionalist School
Austrian economics #
Binary economics
Bioeconomics
Buddhist economics
Complexity economics
Co-operative economics
Distributism
Ecological economics §
Evolutionary economics # § (partly within mainstream economics)
Econophysics
Feminist economics # §
Georgism
Gift-based economics
Green Economics
Humanistic economics
Innovation Economics
Institutional economics # §
Islamic economics
Marxian economics #
Mutualism
Neuroeconomics
Participatory economics
Political Economy
Post-Keynesian economics § including Modern Monetary Theory and Circuitism
Post scarcity
Pluralism in economics
Resource-based economics – not to be confused with a resource-based economy
Real-world economics
Sharing economics
Socialist economics #
Social economics (partially heterodox usage)
Sraffian economics #
Technocracy (Energy Accounting)
Thermoeconomics
Mouvement Anti-Utilitariste dans les Sciences Sociales
# Listed in Journal of Economic Literature codes at JEL: B5 – Current Heterodox Approaches.
§ Listed in The New Palgrave Dictionary of Economics
Some schools in the social sciences aim to promote certain perspectives: classical and modern political economy; economic sociology and anthropology; gender and racial issues in economics; and so on.
Notable heterodox economists
Alfred Eichner
Alice Amsden
Aníbal Pinto
Anwar Shaikh
Bernard Lonergan
Bill Mitchell
Bryan Caplan
Carlota Perez
Carolina Alves
Celso Furtado
Dani Rodrik
David Harvey
Duncan Foley
E. F. Schumacher
Edward Nell
Esther Dweck
F.A. Hayek
Frank Stilwell
Franklin Serrano
Frederic S. Lee
Frederick Soddy
G.L.S. Shackle
Hans Singer
Ha-Joon Chang
Heinz Kurz
Henry George
Herman Daly
Hyman Minsky
Jack Amariglio
Jeremy Rifkin
Joan Robinson
John Bellamy Foster
John Komlos
Joseph Schumpeter
Karl Marx
Kate Raworth
Lance Taylor
Ludwig Lachmann
Ludwig von Mises
Lyndon Larouche
Maria da Conceição Tavares
Mariana Mazzucato
Mason Gaffney
Michael Albert
Michael Hudson
Michael Perelman
Michał Kalecki
Murray Rothbard
Mushtaq Khan
Nelson Barbosa
Nicholas Georgescu-Roegen
Nicolaus Tideman
Paul A. Baran
Paul Cockshott
Paul Sweezy
Peter Navarro
Piero Sraffa
Rania Antonopoulos
Raúl Prebisch
Richard D. Wolff
Robin Hahnel
Ruy Mauro Marini
Simon Zadek
Stephanie Kelton
Stephen Resnick
Theotônio dos Santos
Thorstein Veblen
Tony Lawson
Yanis Varoufakis
Yusif Sayigh
See also
Association for Evolutionary Economics
Chinese economic reform
Degrowth
EAEPE
Foundations of Real-World Economics
Happiness economics
Humanistic economics
Kinetic exchange models of markets
Pluralism in economics
Post-autistic economics
Post-growth
Real-world economics review
Real-world economics
Notes
References
Further reading
Articles
Books
Jo, Tae-Hee, Chester, Lynne, and D'Ippoliti, eds. 2017. The Routledge Handbook of Heterodox Economics. London and New York: Routledge.
Gerber, Julien-Francois and Steppacher, Rolf, ed., 2012. Towards an Integrated Paradigm in Heterodox Economics: Alternative Approaches to the Current Eco-Social Crises. Palgrave Macmillan.
Lee, Frederic S. 2009. A History of Heterodox Economics Challenging the Mainstream in the Twentieth Century. London and New York: Routledge. 2009
Harvey, John T. and Garnett Jr., Robert F., ed., 2007. Future Directions for Heterodox Economics, Series Advances in Heterodox Economics, The University of Michigan Press.
What Every Economics Student Needs to Know. Routledge, 2014.
McDermott, John, 2003. Economics in Real Time: A Theoretical Reconstruction, Series Advances in Heterodox Economics, The University of Michigan Press.
Rochon, Louis-Philippe and Rossi, Sergio, editors, 2003. Modern Theories of Money: The Nature and Role of Money in Capitalist Economies. Edward Elgar Publishing.
Solow, Robert M. (20 March 1988). "The Wide, Wide World of Wealth" (review of The New Palgrave: A Dictionary of Economics, edited by John Eatwell, Murray Milgate and Peter Newman; four volumes; 4,103 pp.; New York: Stockton Press). New York Times. https://www.nytimes.com/1988/03/20/books/the-wide-wide-world-of-wealth.html?scp=1
Stilwell, Frank., 2011. Political Economy: The Contest of Economic Ideas. Oxford University Press.
Foundations of Real-World Economics: What Every Economics Student Needs to Know, 2nd edition. Abingdon-on-Thames, UK: Routledge, 2019.
Articles, conferences, papers
Lavoie, Marc, 2006. Do Heterodox Theories Have Anything in Common? A Post-Keynesian Point of View.
Lawson, Tony, 2006. "The Nature of Heterodox Economics," Cambridge Journal of Economics, 30(4), pp. 483–505. Pre-publication copy.
Journals
Evolutionary and Institutional Economics Review
Journal of Institutional Economics''
Cambridge Journal of Economics
Real-world economics review
International Journal of Pluralism and Economics Education
Review of Radical Political Economy
External links
Association for Heterodox Economics
Heterodox Economics Newsletter
Heterodox Economics Directory (Graduate and Undergraduate Programs, Journals, Publishers and Book Series, Associations, Blogs, and Institutions and Other Web Sites)
Association for Evolutionary Economics (AFEE)
International Confederation of Associations for Pluralism in Economics (ICAPE)
Union for Radical Political Economics (URPE)
Association for Social Economics (ASE)
Post-Keynesian Economics Study Group (PKSG)
Political economy | 0.764899 | 0.995262 | 0.761274 |
Outcome-based education | Outcome-based education or outcomes-based education (OBE) is an educational theory that bases each part of an educational system around goals (outcomes). By the end of the educational experience, each student should have achieved the goal. There is no single specified style of teaching or assessment in OBE; instead, classes, opportunities, and assessments should all help students achieve the specified outcomes. The role of the faculty adapts into instructor, trainer, facilitator, and/or mentor based on the outcomes targeted.
Outcome-based methods have been adopted in education systems around the world, at multiple levels.
Australia and South Africa adopted OBE policies from the 1990s to the mid-2000s, but abandoned them in the face of substantial community opposition. The United States has had an OBE program in place since 1994 that has been adapted over the years. In 2005, Hong Kong adopted an outcome-based approach for its universities. Malaysia implemented OBE in all of its public school systems in 2008. The European Union has proposed an education shift to focus on outcomes across the EU. In an international effort to accept OBE, the Washington Accord was created in 1989; it is an agreement to accept undergraduate engineering degrees that were obtained using OBE methods.
Differences from traditional education methods
OBE can primarily be distinguished from traditional education methods by the way it incorporates three elements: a theory of education, a systematic structure for education, and a specific approach to instructional practice. It organizes the entire educational system around what is considered essential for learners to be able to do successfully at the end of their learning experiences. In this model, the term "outcome" is the core concept, and it is sometimes used interchangeably with the terms "competency", "standard", "benchmark", and "attainment target". OBE also uses the same methodology formally and informally adopted in the actual workplace to achieve outcomes. It focuses on the following skills when developing curricula and outcomes:
Life skills;
Basic skills;
Professional and vocational skills;
Intellectual skills;
Interpersonal and personal skills.
In a traditional education system, students are given grades and rankings compared to each other. Content and performance expectations are based primarily on what was taught in the past to students of a given age (typically 12 to 18). The goal of this education was to present the knowledge and skills of an older generation to the new generation of students, and to provide students with an environment in which to learn. The process paid little attention (beyond the classroom teacher) to whether or not students learned any of the material.
Benefits of OBE
Clarity
The focus on outcomes creates a clear expectation of what needs to be accomplished by the end of the course. Students will understand what is expected of them and teachers will know what they need to teach during the course. Clarity is important over years of schooling and when team teaching is involved. Each team member, or year in school, will have a clear understanding of what needs to be accomplished in each class, or at each level, allowing students to progress. Those designing and planning the curriculum are expected to work backwards once an outcome has been decided upon; they must determine what knowledge and skills will be required to reach the outcome.
Flexibility
With a clear sense of what needs to be accomplished, instructors will be able to structure their lessons around the students' needs. OBE does not specify a particular method of instruction, leaving instructors free to teach their students using any method. Instructors will also be able to recognize diversity among students by using various teaching and assessment techniques during their class. OBE is meant to be a student-centered learning model. Teachers are meant to guide and help the students understand the material in any way necessary; study guides and group work are some of the methods instructors can use to facilitate student learning.
Comparison
OBE can be compared across different institutions. On an individual level, institutions can look at what outcomes a student has achieved to decide what level the student would be at within a new institution. On an institutional level, institutions can compare themselves, by checking to see what outcomes they have in common, and find places where they may need improvement, based on the achievement of outcomes at other institutions. The ability to compare easily across institutions allows students to move between institutions with relative ease. The institutions can compare outcomes to determine what credits to award the student. The clearly articulated outcomes should allow institutions to assess the student’s achievements rapidly, leading to increased movement of students. These outcomes also work for school to work transitions. A potential employer can look at records of the potential employee to determine what outcomes they have achieved. They can then determine if the potential employee has the skills necessary for the job.
Involvement
Student involvement in the classroom is a key part of OBE. Students are expected to do their own learning, so that they gain a full understanding of the material. Increased student involvement allows students to feel responsible for their own learning, and they should learn more through this individual learning. Other aspects of involvement are parental and community, through developing curriculum, or making changes to it. OBE outcomes are meant to be decided upon within a school system, or at a local level. Parents and community members are asked to give input in order to uphold the standards of education within a community and to ensure that students will be prepared for life after school.
Drawbacks of OBE
Definition
The definitions of the outcomes decided upon are subject to interpretation by those implementing them. Across different programs or even different instructors outcomes could be interpreted differently, leading to a difference in education, even though the same outcomes were said to be achieved. By outlining specific outcomes, a holistic approach to learning is lost. Learning can find itself reduced to something that is specific, measurable, and observable. As a result, outcomes are not yet widely recognized as a valid way of conceptualizing what learning is about.
Assessment problems
When determining if an outcome has been achieved, assessments may become too mechanical, looking only to see if the student has acquired the knowledge. The ability to use and apply the knowledge in different ways may not be the focus of the assessment. The focus on determining if the outcome has been achieved leads to a loss of understanding and learning for students, who may never be shown how to use the knowledge they have gained. Instructors are faced with a challenge: they must learn to manage an environment that can become fundamentally different from what they are accustomed to. In regards to giving assessments, they must be willing to put in the time required to create a valid, reliable assessment that ideally would allow students to demonstrate their understanding of the information, while remaining objective.
Generality
Education outcomes can lead to a constrained nature of teaching and assessment. Assessing liberal outcomes such as creativity, respect for self and others, responsibility, and self-sufficiency, can become problematic. There is not a measurable, observable, or specific way to determine if a student has achieved these outcomes. Due to the nature of specific outcomes, OBE may actually work against its ideals of serving and creating individuals that have achieved many outcomes.
Involvement
Parental involvement, as discussed in the benefits section, can also be a drawback: if parents and community members are not willing to express their opinions on the quality of the education system, the system may not see a need for improvement and may not change to meet students' needs. Parents may also become too involved, requesting so many changes that important improvements get lost among the others being suggested. Instructors will also find that their work is increased; they must work first to understand the outcome, then build a curriculum around each outcome they are required to meet. Instructors have found that implementing multiple outcomes equally is difficult, especially in primary school. Instructors will also find their workload increased if they choose an assessment method that evaluates students holistically.
Adoption and removal
Australia
In the early 1990s, all states and territories in Australia developed intended curriculum documents largely based on OBE for their primary and secondary schools. Criticism arose shortly after implementation. Critics argued that no evidence existed that OBE could be implemented successfully on a large scale, in either the United States or Australia. An evaluation of Australian schools found that implementing OBE was difficult. Teachers felt overwhelmed by the amount of expected achievement outcomes. Educators believed that the curriculum outcomes did not attend to the needs of the students or teachers. Critics felt that too many expected outcomes left students with shallow understanding of the material. Many of Australia’s current education policies have moved away from OBE and towards a focus on fully understanding the essential content, rather than learning more content with less understanding.
Western Australia
Officially, an agenda to implement outcomes-based education took place between 1992 and 2008 in Western Australia. Dissatisfaction with OBE escalated from 2004, when the government proposed the implementation of an alternative assessment system using OBE 'levels' for years 11 and 12. With government school teachers not permitted to publicly express dissatisfaction with the new system, a community lobby group called PLATO was formed in June 2004 by high school science teacher Marko Vojkavi. Teachers anonymously expressed their views through the website and online forums, and the website quickly became one of the most widely read educational websites in Australia, with more than 180,000 hits per month and an archive of more than 10,000 articles on the subject of OBE implementation. In 2008 it was officially abandoned by the state government, with Minister for Education Mark McGowan remarking that the 1990s fad "to dispense with syllabus" was over.
European Union
In December 2012, the European Commission presented a new strategy to decrease the youth unemployment rate, which at the time was close to 23% across the European Union. The European Qualifications Framework calls for a shift towards learning outcomes in primary and secondary schools throughout the EU. Students are expected to learn the skills that they will need when they complete their education. It also calls for lessons to have a stronger link to employment through work-based learning (WBL). Work-based learning for students should also lead to recognition of vocational training for these students. The program also sets goals for learning foreign languages and for teachers' continued education. It also highlights the importance of using technology, especially the internet, in learning to make it relevant to students.
Hong Kong
Hong Kong's University Grants Committee adopted an outcomes-based approach to teaching and learning in 2005. No specific approach was prescribed, leaving universities to design it themselves. Universities were also given the goal of ensuring an education for their students that contributes to social and economic development, as defined by the community in which the university resides. With little to no direction or feedback from outside, universities will have to determine on their own whether their approach is achieving its goals.
Malaysia
OBE has been practiced in Malaysia since the 1950s; however, as of 2008, OBE is being implemented at all levels of education, especially tertiary education. This change is a result of the belief that the education system used prior to OBE inadequately prepared graduates for life outside of school. The Ministry of Higher Education has pushed for this change because of the number of unemployed graduates. Findings in 2006 state that nearly 70% of graduates from public universities were considered unemployed. A further study of those graduates found that they felt they lacked job experience, communication skills, and qualifications relevant to the current job market. The Malaysian Qualifications Agency (MQA) was created to oversee the quality of education and to ensure outcomes were being reached. The MQA created a framework that includes eight levels of qualification within higher education, covering three sectors: skills; vocational and technical; and academic. Along with meeting the standards set by the MQA, universities set and monitor their own outcome expectations for students.
South Africa
OBE was introduced to South Africa in the late 1990s by the post-apartheid government as part of its Curriculum 2005 program. Initial support for the program derived from anti-apartheid education policies. The policy also gained support from labor movements that borrowed ideas about competency-based education and vocational education from New Zealand and Australia, and that critiqued the apartheid education system. With no strong alternative proposals, the idea of outcome-based education, and a national qualification framework, became the policy of the African National Congress government. This policy was believed to be a democratization of education: people would have a say in what they wanted the outcomes of education to be. It was also believed to be a way to raise education standards and increase the availability of education. The National Qualifications Framework (NQF) went into effect in 1997. In 2001 people realized that the intended effects were not being seen. By 2006 no proposals to change the system had been accepted by the government, causing a hiatus of the program. The program came to be viewed as a failure and a new curriculum improvement process was announced in 2010, slated to be implemented between 2012 and 2014.
United States
In 1983, a report from the National Commission on Excellence in Education declared that American education standards were eroding, that young people in the United States were not learning enough. In 1989, President Bush and the nation’s governors set national goals to be achieved by the year 2000. Goals 2000: Educate America Act was signed in March 1994. The goal of this new reform was to show that results were being achieved in schools. In 2001, the No Child Left Behind Act took the place of Goals 2000. It mandated certain measurements as a condition of receiving federal education funds. States are free to set their own standards, but the federal law mandates public reporting of math and reading test scores for disadvantaged demographic subgroups, including racial minorities, low-income students, and special education students. Various consequences for schools that do not make "adequate yearly progress" are included in the law. In 2010, President Obama proposed improvements for the program. In 2012, the U.S. Department of Education invited states to request flexibility waivers in exchange for rigorous plans designed to improve students' education in the state.
Sri Lanka
Although it is unclear when OBE was first adopted in Sri Lankan educational practice, in 2004 the UGC, jointly with the CVCD, established a Quality Assurance and Accreditation (QAA) Unit (subsequently renamed the QAA Council in 2005), which started the first cycle of reviews based on the "Quality Assurance Handbook for Sri Lankan Universities 2002". The handbook emphasizes intended learning outcomes as one of the main measures in evaluating study programmes. Based on feedback, the manual was subsequently revised; in the revised manual, the Ministry of Higher Education (MoHE) proposed that Outcome-Based Education (OBE), together with Student-Centred Learning (SCL) concepts, be introduced within higher education study programmes. Subsequently, almost all the manuals developed in this regard included OBE, and more objective measures for it were introduced into reviews.
Today, all teacher training programmes emphasize training in OBE concepts, including the Certificate of Teaching in Higher Education (CTHE) run by the universities and the postgraduate degree programme in medical education run by the Postgraduate Institute of Medicine (PGIM).
Because the QAC of the UGC has introduced a mechanism to include OBE concepts, and compliance is frequently monitored, almost all degree programmes in Sri Lanka now adopt OBE concepts in their curricula.
India
India became a permanent signatory member of the Washington Accord on 13 June 2014. India has started implementing OBE in higher technical education such as diploma and undergraduate programmes. The National Board of Accreditation, a body for promoting international quality standards for technical education in India, has accredited only programmes running with OBE since 2013.
The National Board of Accreditation mandates establishing a culture of outcomes-based education in institutions that offer engineering, pharmacy, and management programs. Analysing outcomes and using the analytical reports to find gaps and carry out continuous improvement is an essential cultural shift from how these programs are run when OBE culture is not embraced. Outcomes analysis requires a huge amount of data to be processed and made available at any time, anywhere. Such access to scalable, accurate, automated, and real-time data analysis is possible only if the institute adopts either a spreadsheet-based measurement system or some kind of home-grown or commercial software system. Spreadsheet-based measurement and analysis systems have been observed not to scale when stakeholders want to analyse longitudinal data.
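As a rough sketch of the kind of computation such a system performs (all names, records, and the 60% threshold here are hypothetical, invented purely for illustration), outcome attainment can be computed by aggregating assessment scores per course outcome:

```python
# Hypothetical sketch of course-outcome attainment analysis.
# The records, outcome labels (CO1, CO2), and 60% threshold are all
# invented for illustration; a real system would read from a database.
from collections import defaultdict

assessment_records = [
    {"student": "S1", "outcome": "CO1", "score": 72},
    {"student": "S1", "outcome": "CO2", "score": 55},
    {"student": "S2", "outcome": "CO1", "score": 64},
    {"student": "S2", "outcome": "CO2", "score": 81},
]

ATTAINMENT_THRESHOLD = 60  # assumed pass mark for "attaining" an outcome

def outcome_attainment(records):
    """Return, for each outcome, the fraction of assessments at or above the threshold."""
    totals = defaultdict(int)
    attained = defaultdict(int)
    for rec in records:
        totals[rec["outcome"]] += 1
        if rec["score"] >= ATTAINMENT_THRESHOLD:
            attained[rec["outcome"]] += 1
    return {co: attained[co] / totals[co] for co in totals}

print(outcome_attainment(assessment_records))
# -> {'CO1': 1.0, 'CO2': 0.5}
```

The scaling problem noted above arises when such aggregation must run over many cohorts and years of records; spreadsheets require manual recomputation, whereas a database-backed system can aggregate longitudinal data on demand.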
See also
Washington Accord
References
Further reading
Castleberry, Thomas. 2006. "Student Learning Outcomes Assessment within the Texas State University MPA Program." Applied Research Project. Texas State University.
Sunseri, Ron. 1994. O.B.E. [i.e.] Outcome Based Education: Understanding the Truth about Education Reform. Sisters, Ore.: Multnomah Books. 235 p.
Cultural learning | Cultural learning is the way a group of people or animals within a society or culture tend to learn and pass on information. Learning styles can be greatly influenced by how a culture socializes with its children and young people. Cross-cultural research in the past fifty years has primarily focused on differences between Eastern and Western cultures. Some scholars believe that cultural learning differences may be responses to the physical environment in the areas in which a culture was initially founded. These environmental differences include climate, migration patterns, war, agricultural suitability, and endemic pathogens. Cultural evolution, upon which cultural learning is built, is believed to be a product of only the past 10,000 years and to hold little connection to genetics.
Overview
Cultural learning allows individuals to acquire skills that they would be unable to do independently over the course of their lifetimes. Cultural learning is believed to be particularly important for humans. Humans are weaned at an early age compared to the emergence of adult dentition. The immaturity of dentition and the digestive system, the time required for growth of the brain, and the rapid skeletal growth needed for the young to reach adult height and strength mean that children have special digestive needs and are dependent on adults for a long period of time. This time of dependence also allows time for cultural learning to occur before passage into adulthood.
The basis of cultural learning is that people create, remember, and deal with ideas; they understand and apply specific systems of symbolic meaning. Cultures have been compared to sets of control mechanisms, plans, recipes, rules, or instructions. Cultural differences have been found in academic motivation, achievement, learning style, conformity, and compliance. Cultural learning is dependent on innovation, or the ability to create new responses to the environment, and on the ability to communicate or imitate the behaviour of others. Animals that are able to solve problems and imitate the behaviour of others are therefore able to transmit information across generations.
Cass Sunstein described in 2007 how Wikipedia moves us past the rigid limits of socialist planning that Friedrich Hayek attacked on the grounds that "no planner could possibly obtain the dispersed bits of information held by individual members of society. Hayek insisted that the knowledge of individuals, taken as a whole, is far greater than that of any commission or board, however diligent and expert."
Examples
An example of cultural transmission can be seen in post-World War II Japan during the American occupation of the country. There were political, economic, and social changes in Japan influenced by America, including changes to its constitution, reforms, and the consumption of media. The occupation turned into a strong link between the two nations: over time, Japanese culture began to accept American touchstones like jazz and baseball, while Americans were introduced to Japanese cuisine and entertainment.
A modern vehicle for cultural transmission is the internet. One example is millennials, who "are both products of their culture as well as influencers." Millennials are often the ones teaching older generations how to navigate the web. The teacher has to accommodate the learning process of the student (in this case, an older-generation student) in order to transmit the information fluently and in a manner that is easier to understand. This goes hand in hand with the Communication Accommodation Theory, which "elaborates the human tendency to adjust their behaviour while interacting." The end result is that, with the help of someone else, people are able to share their newly acquired skills among people in their culture, which was not possible before.
Humans also tend to follow "communicative" ways of learning, as seen in a study by Hanna Marno, a researcher at the International School for Advanced Studies. In the study, infants followed an adult's action of pressing a button to light up a lamp based on the adult's "non-verbal (eye contact) and verbal cues."
In non-human animals
Enculturation can also be used to describe the raising of an animal in which the animal acquires traits and skills that would not otherwise be acquired if it were raised by another of its own species.
A wide variety of social animals learn from other members of their group or pack. Wolves, for example, learn multiple hunting strategies from the other pack members. A large number of bird species also engage in cultural learning; such learning is critical for the survival of some species. Dolphins also pass on knowledge about tool use.
See also
Educational anthropology
Intercultural competence
Intercultural communication principles
Socialization
Dual inheritance theory
References
Inline
General
van Schaik, Carel P. & Burkart, Judith M. (2011). "Social learning and evolution: the cultural intelligence hypothesis". Philosophical Transactions of the Royal Society B: Biological Sciences, 366(1567), 1008-1016.
Chang, Lei; Mak, Miranda C. K.; Li, Tong; Wu, Bao Pei; Chen, Bin Bin; & Lu, Hui Jing (2011). "Cultural Adaptations to Environmental Variability: An Evolutionary Account of East–West Differences" (PDF). Educational Psychology Review, 23(1), 99-129. doi:10.1007/s10648-010-9149-0
Lehmann, L. L., Feldman, M. W., & Kaeuffer, R. R. (2010). "Cumulative cultural dynamics and the co-evolution of cultural innovation and transmission: an ESS model for panmictic and structured populations". Journal of Evolutionary Biology, 23(11), 2356-2369. doi:10.1111/j.1420-9101.2010.02096.x
MacDonald, K. (2007). "Cross-cultural Comparison of Learning in Human Hunting". Human Nature, 18(4), 386-402. doi:10.1007/s12110-007-9019-8
"The American Occupation of Japan, 1945-1952 | Asia for Educators | Columbia University".
Early childhood education | Early childhood education (ECE), also known as nursery education, is a branch of education theory that relates to the teaching of children (formally and informally) from birth up to the age of eight. Traditionally, this is up to the equivalent of third grade. ECE is described as an important period in child development.
ECE emerged as a field of study during the Enlightenment, particularly in European countries with high literacy rates. It continued to grow through the nineteenth century as universal primary education became a norm in the Western world. In recent years, early childhood education has become a prevalent public policy issue, as funding for preschool and pre-K is debated by municipal, state, and federal lawmakers. Governing entities are also debating the central focus of early childhood education, with debate over developmentally appropriate play versus a strong academic preparation curriculum in reading, writing, and math. The global priority placed on early childhood education is underscored by the targets of United Nations Sustainable Development Goal 4. However, "only around 4 in 10 children aged 3 and 4 attend early childhood education" around the world. Furthermore, levels of participation vary widely by region, with "around 2 in 3 children in Latin America and the Caribbean attending ECE compared to just under half of children in South Asia and only 1 in 4 in sub-Saharan Africa".
ECE is also a professional designation earned through a post-secondary education program. For example, in Ontario, Canada, the designations ECE (Early Childhood Educator) and RECE (Registered Early Childhood Educator) may only be used by registered members of the College of Early Childhood Educators, which is made up of accredited child care professionals who are held accountable to the College's standards of practice.
Research shows that early-childhood education has substantial positive short- and long-term effects on the children who attend such education, and that the costs are dwarfed by societal gains of the education programs.
Theories of child development
The Developmental Interaction Approach is based on the theories of Jean Piaget, Erik Erikson, John Dewey, and Lucy Sprague Mitchell. The approach focuses on learning through discovery.
Jean Jacques Rousseau recommended that teachers should exploit individual children's interests to make sure each child obtains the information most essential to his personal and individual development. The five developmental domains of childhood development include:
Physical: the way in which a child develops biological and physical functions, including eyesight and motor skills
Social: the way in which a child interacts with others. Children develop an understanding of their responsibilities and rights as members of families and communities, as well as an ability to relate to and work with others.
Emotional: the way in which a child creates emotional connections and develops self-confidence. Emotional connections develop when children relate to other people and share feelings.
Language: the way in which a child communicates, including how they present their feelings and emotions, both to other people and to themselves. At 3 months, children employ different cries for different needs. At 6 months they can recognize and imitate the basic sounds of spoken language. In the first 3 years, children need to be exposed to communication with others in order to pick up language. "Normal" language development is measured by the rate of vocabulary acquisition.
Cognitive skills: the way in which a child organizes information. Cognitive skills include problem solving, creativity, imagination and memory. They embody the way in which children make sense of the world. Piaget believed that children exhibit prominent differences in their thought patterns as they move through the stages of cognitive development: sensorimotor period, the pre-operational period, and the operational period.
To support those developmental domains, a child has a set of needs that must be met for learning. Maslow's hierarchy of needs describes the different levels of needs that must be met before a child can learn.
Froebel's play theory
Friedrich Froebel was a German educator who believed in the idea of children learning through play. Specifically, he said, "play is the highest expression of human development in childhood, for it alone is the free expression of what is in the child's soul." Froebel believed that teachers should act as facilitators and supporters of students' play, rather than as authoritative, disciplinary figures. He created educational open-ended toys that he called "gifts" and "occupations", designed to encourage self-expression and initiative.
Maria Montessori's theory
Maria Montessori was an Italian physician who, based on her observations of young children in classrooms, developed a method of education focused on independence. In Montessori education, a typical classroom is made up of students of different ages, and the curriculum is based on the students' developmental stage, which Montessori called the four planes of development.
Montessori's Four Planes of Development:
The first plane (birth to age 6): During this stage, children soak up information about the world around them quickly, which is why Montessori refers to it as the "absorbent mind". Physical independence, such as completing tasks independently, is a main focus of the child at this time and children's individual personalities begin to form and develop.
The second plane (Ages 6–12): During this stage, children also focus on independence, but intellectual rather than physical. Montessori classrooms use what is called "cosmic education" during this stage, which emphasizes children building on their understanding of the world, their place in it, and how everything is interdependent. Children in this plane also begin to develop abstract and moral thinking.
The third plane (Ages 12–18): During this stage, adolescents shift to focus on emotional independence and on the self. Moral values, critical thinking, and self-identity are explored and strengthened.
The fourth plane (Ages 18–24): During this last stage, focus shifts to financial independence. Young adults in this plane begin to solidify their personal beliefs, identity, and role in the world.
Vygotsky's socio-cultural learning theory
Russian psychologist Lev Vygotsky proposed a "socio-cultural learning theory" that emphasized the impact of social and cultural experiences on individual thinking and the development of mental processes. Vygotsky's theory emerged in the 1930s and is still discussed today as a means of improving and reforming educational practices.
In Vygotsky's theories of learning, he also postulated the theory of the zone of proximal development. This theory ties in with children building off prior knowledge and gaining new knowledge related to skills they already have. This theory further describes how new knowledge or skills are taken in if they are not fully learned but are starting to emerge. A teacher or older friend lends support to a child learning a skill, be it building a block castle, tying a shoe, or writing one's name. As the child becomes more capable of the steps of the activity, the adult or older child withdraws supports gradually, until the child is competent completing the process on his/her own. This is done within that activity's zone—the distance between where the child is, and where he potentially will be. In each zone of proximal development, they build on skills and grow by learning more skills in their proximal development range. They build on the skills by being guided by teachers and parents. They must build from where they are in their zone of proximal development.
Vygotsky argued that since cognition occurs within a social context, our social experiences shape our ways of thinking about and interpreting the world. People such as parents, grandparents, and teachers play the roles of what Vygotsky described as knowledgeable and competent adults. Although Vygotsky predated the social constructivists, he is commonly classified as one. Social constructivists believe that an individual's cognitive system is a product of communication and interaction within social groups and cannot be separated from social life. Vygotsky advocated that teachers facilitate rather than direct student learning. Teachers should provide a learning environment where students can explore and develop their learning without direct instruction. His approach calls for teachers to incorporate students' needs and interests. This is important because students' levels of interest and ability will vary, and instruction needs to be differentiated accordingly.
However, teachers can enhance understanding and learning for students. Vygotsky states that by sharing meanings that are relevant to the children's environment, adults promote cognitive development as well. Their teachings can influence the thought processes and perspectives of students when they are in new and similar environments. Since Vygotsky promotes facilitation in children's learning, he suggests that knowledgeable people (and adults in particular) can also enhance knowledge through cooperative meaning-making with students; this can be done within the zone of proximal development by guiding children's learning or thinking skills. Vygotsky's approach encourages guided participation and student exploration with support. Teachers can help students achieve their cognitive development levels through consistent and regular interaction in collaborative knowledge-making learning processes.
Piaget's constructivist theory
Jean Piaget's constructivist theory gained influence in the 1970s and '80s. Although Piaget himself was primarily interested in a descriptive psychology of cognitive development, he also laid the groundwork for a constructivist theory of learning. Piaget believed that learning comes from within: children construct their own knowledge of the world through experience and subsequent reflection. He said that "if logic itself is created rather than being inborn, it follows that the first task of education is to form reasoning." Within Piaget's framework, teachers should guide children in acquiring their own knowledge rather than simply transferring knowledge.
According to Piaget's theory, when young children encounter new information, they attempt to accommodate and assimilate it into their existing understanding of the world. Accommodation involves adapting mental schemas and representations to make them consistent with reality. Assimilation involves fitting new information into their pre-existing schemas. Through these two processes, young children learn by equilibrating their mental representations with reality. They also learn from mistakes.
A Piagetian approach emphasizes experiential education; in school, experiences become more hands-on and concrete as students explore through trial and error. Thus, crucial components of early childhood education include exploration, manipulating objects, and experiencing new environments. Subsequent reflection on these experiences is equally important.
Piaget's concept of reflective abstraction was particularly influential in mathematical education. Through reflective abstraction, children construct more advanced cognitive structures out of the simpler ones they already possess. This allows children to develop mathematical constructs that cannot be learned through equilibration – making sense of experiences through assimilation and accommodation – alone.
According to Piagetian theory, language and symbolic representation are preceded by the development of corresponding mental representations. Research shows that the level of reflective abstraction achieved by young children limits the degree to which they can represent physical quantities with written numerals. Piaget held that children can invent their own procedures for the four arithmetical operations, without being taught any conventional rules.
Piaget's theory implies that computers can be a great educational tool for young children when used to support the design and construction of their projects. McCarrick and Xiaoming found that computer play is consistent with this theory. However, Plowman and Stephen found that the effectiveness of computers is limited in the preschool environment; their results indicate that computers are only effective when directed by the teacher. This suggests, according to constructivist theory, that the role of preschool teachers is critical to the successful adoption of computers, at least as those tools existed in 2003.
Kolb's experiential learning theory
David Kolb's experiential learning theory, which was influenced by John Dewey, Kurt Lewin and Jean Piaget, argues that children need to experience things in order to learn: "The process whereby knowledge is created through the transformation of experience. Knowledge results from the combinations of grasping and transforming experience." Experiential learning theory is distinctive in that children are seen and taught as individuals. As a child explores and observes, teachers ask the child probing questions. The child can then adapt prior knowledge to learning new information.
Kolb breaks down this learning cycle into four stages: concrete experience, reflective observation, abstract conceptualization, and active experimentation. Children observe new situations, think about the situation, make meaning of the situation, then test that meaning in the world around them.
Practical implications of early childhood education
In recent decades, studies have shown that early childhood education is critical in preparing children to enter and succeed in the (grade school) classroom, diminishing their risk of social-emotional mental health problems and increasing their self-sufficiency later in their lives. In other words, the child needs to be taught to rationalize everything and to be open to interpretations and critical thinking. There is no subject to be considered taboo, starting with the most basic knowledge of the world that they live in, and ending with deeper areas, such as morality, religion and science. Visual stimulus and response time as early as 3 months can be an indicator of verbal and performance IQ at age 4 years. When parents value ECE and its importance their children generally have a higher rate of attendance. This allows children the opportunity to build and nurture trusting relationships with educators and social relationships with peers.
By providing education in a child's most formative years, ECE also has the capacity to pre-emptively begin closing the educational achievement gap between low and high-income students before formal schooling begins. Children of low socioeconomic status (SES) often begin school already behind their higher SES peers; on average, by the time they are three, children with high SES have three times the number of words in their vocabularies as children with low SES. Participation in ECE, however, has been proven to increase high school graduation rates, improve performance on standardized tests, and reduce both grade repetition and the number of children placed in special education.
A study was conducted by the Aga Khan Development Network's Madrasa Early Childhood Programme on the impact that early childhood education had on students' performance in grade school. Looking specifically at students who attended the Madrasa Early Childhood schools (virtually all of whom came from economically disadvantaged backgrounds), the study found that they had consistently ranked in the top 20% in grade 1 classes. The study also concluded that any formal early childhood education contributed to higher levels of cognitive development in language, mathematics, and non-verbal reasoning skills.
Especially since the first wave of results from the Perry Preschool Project were published, there has been widespread consensus that the quality of early childhood education programs correlate with gains in low-income children's IQs and test scores, decreased grade retention, and lower special education rates.
Several studies have reported that children enrolled in ECE increase their IQ scores by 4–11 points by age five, while a Milwaukee study reported a 25-point gain. In addition, students who had been enrolled in the Abecedarian Project, an often-cited ECE study, scored significantly higher on reading and math tests by age fifteen than comparable students who had not participated in early childhood programs. In addition, 36% of students in the Abecedarian Preschool Study treatment group would later enroll in four-year colleges compared to 14% of those in the control group.
In 2017, researchers reported that children who participate in ECE graduate high school at significantly greater rates than those who do not. Additionally, those who participate in ECE require special education and must repeat a grade at significantly lower rates than their peers who did not receive ECE. The NIH asserts that ECE leads to higher test scores for students from preschool through age 21, improved grades in math and reading, and stronger odds that students will keep going to school and attend college.
Nathaniel Hendren and Ben Sprung-Keyser, two Harvard economists, found high Marginal Values of Public Funds (MVPFs) for investments in programs supporting the health and early education of children, particularly those that reach children from low-income families. The average MVPF for these types of initiatives is over 5, while the MVPFs for programs for adults generally range from 0.5 to 2.
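For readers unfamiliar with the metric, the MVPF in Hendren and Sprung-Keyser's framework is the ratio of beneficiaries' willingness to pay for a policy to its net cost to the government, where the net cost subtracts fiscal externalities such as higher future tax revenue:

$$\text{MVPF} = \frac{\text{beneficiaries' willingness to pay}}{\text{net cost to the government}}$$

On this reading, an average MVPF above 5 for child-focused programs means recipients gain more than five dollars of value per dollar of net public spending, while adult programs at 0.5 to 2 return far less.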
Beyond benefitting societal good, ECE also significantly impacts the socioeconomic outcomes of individuals. For example, by age 26, students who had been enrolled in Chicago Child-Parent Centers were less likely to be arrested, abuse drugs, and receive food stamps; they were more likely to have high school diplomas, health insurance and full-time employment. Studies also show that ECE heightens social engagement, bolsters lifelong health, reduces the incidence of teen pregnancy, supports mental health, decreases the risk of heart disease, and lengthens lifespans.
The World Bank's 2019 World Development Report on The Changing Nature of Work identifies early childhood development programs as one of the most effective ways governments can equip children with the skills they will need to succeed in future labor markets.
According to a 2020 study in the Journal of Political Economy by Clemson University economist Jorge Luis García, Nobel laureate James J. Heckman and University of Southern California economists Duncan Ermini Leaf and María José Prados, every dollar spent on a high-quality early-childhood programs led to a return of $7.3 over the long-term.
The Perry Preschool Project
The Perry Preschool Project, which was conducted in the 1960s in Ypsilanti, Michigan, is the oldest social experiment in the field of early childhood education and has heavily influenced policy in the United States and across the globe. The experiment enrolled 128 three- and four-year-old African-American children with cognitive disadvantage from low-income families, who were then randomly assigned to treatment and control groups. The intervention for children in the treatment group included active learning preschool sessions on weekdays for 2.5 hours per day. The intervention also included weekly visits by the teachers to the homes of the children for about 1.5 hours per visit to improve parent-child interactions at home.
Initial evaluations of the Perry intervention showed that the preschool program failed to significantly boost an IQ measure. However, later evaluations that followed up the participants for more than fifty years have demonstrated the long-term economic benefits of the program, even after accounting for the small sample size of the experiment, flaws in its randomization procedure, and sample attrition. There is substantial evidence of large treatment effects on the criminal convictions of male participants, especially for violent crime, and their earnings in middle adulthood. Research points to improvements in non-cognitive skills, executive functioning, childhood home environment, and parental attachment as potential sources of the observed long-term impacts of the program. The intervention's many benefits also include improvements in late-midlife health for both male and female participants. Perry promoted educational attainment through two avenues: total years of education attained and rates of progression to a given level of education. This pattern is particularly evident for females. Treated females received less special education, progressed more quickly through grades, earned higher GPAs, and attained higher levels of education than their control group counterparts.
Research also demonstrates spillover effects of the Perry program on the children and siblings of the original participants. A study concludes, "The children of treated participants have fewer school suspensions, higher levels of education and employment, and lower levels of participation in crime, compared with the children of untreated participants. Impacts are especially pronounced for the children of male participants. These treatment effects are associated with improved childhood home environments." The study also documents beneficial impacts on the male siblings of the original participants. Evidence from the Perry Preschool Project is noteworthy because it advocates for public spending on early childhood programs as an economic investment in a society's future, rather than in the interest of social justice.
International agreements
The Universal Declaration of Human Rights (1948), the International Covenant on Economic, Social, and Cultural Rights (1976), and the Convention on the Rights of the Child (1989) have all addressed childhood education. Article 28 of the Convention on the Rights of the Child states that "States Parties recognize the right of the child to education, and with a view to achieving this right progressively and on the basis of equal opportunity, they shall, in particular:
Make primary education compulsory and available free to all;
Encourage the development of different forms of secondary education, including general and vocational education, and take appropriate measures such as the introduction of free education and offering financial assistance in case of need;
Make higher education accessible to all on the basis of capacity by every appropriate means;
Make educational and vocational information and guidance available and accessible to all children;
Take measures to encourage regular attendance at schools and the reduction of drop-out rates."
The first World Conference on Early Childhood Care and Education took place in Moscow from 27 to 29 September 2010, jointly organized by UNESCO and the city of Moscow.
Under Goal 4 of the Sustainable Development Goals, which the UN General Assembly unanimously approved in 2015, countries committed to "ensure inclusive and equitable quality education", including in early childhood. Among the targets related to Goal 4: "by 2030, ensure that all girls and boys have access to quality early childhood development, care and pre-primary education so that they are ready for primary education." The 'Framework for Action' adopted by UNESCO member states later in 2015 outlines how to translate this target into practice, and encourages states to provide "at least one year of free and compulsory pre-primary education of good quality." The Sustainable Development Goals, however, are not binding international law.
It has been argued that "International law provides no effective protection of the right to pre-primary education." Just two global treaties explicitly reference education prior to primary school. The Convention on the Elimination of All Forms of Discrimination against Women requires states to ensure equality for girls "in pre-school." And in the Convention on the Protection of the Rights of All Migrant Workers and Members of Their Families, states agree that access to "public pre-school educational institutions" shall not be denied due to the parents' or child's "irregular situation with respect to stay."
Less explicitly, the Convention on the Rights of Persons with Disabilities requires that "States Parties shall ensure an inclusive education system at all levels."
In 2022, Human Rights Watch adopted a policy calling on states to make at least one year of free and compulsory, inclusive, quality pre-primary education available and accessible for all children. In doing so, they advocated that one year of pre-primary education be included as part of the minimum core of the right to education. They further called on all states to adopt a detailed plan of action for the progressive implementation of further years of pre-primary education, within a reasonable number of years to be fixed in the plan.
According to UNESCO, a preschool curriculum is one that delivers educational content through daily activities and furthers a child's physical, cognitive, and social development. Generally, preschool curricula are only recognized by governments if they are based on academic research and reviewed by peers.
Preschool for Child Rights has pioneered work in preschool curricula and contributes to child rights through its preschool curriculum.
Curricula in early childhood care and education
Curricula in early childhood care and education (ECCE) is the driving force behind any ECCE programme. It is 'an integral part of the engine that, together with the energy and motivation of staff, provides the momentum that makes programmes live'. It follows therefore that the quality of a programme is greatly influenced by the quality of its curriculum. In early childhood, these may be programs for children or parents, including health and nutrition interventions and prenatal programs, as well as center-based programs for children.
Barriers and challenges
Children's learning potential and outcomes are negatively affected by exposure to violence, abuse and child labour. Thus, protecting young children from violence and exploitation is part of broad educational concerns. Due to difficulties and sensitivities around the issue of measuring and monitoring child protection violations and gaps in defining, collecting and analysing appropriate indicators, data coverage in this area is scant. However, proxy indicators can be used to assess the situation. For example, ratification of relevant international conventions indicates countries' commitment to child protection. By April 2014, 194 countries had ratified the CRC3; and 179 had ratified the 1999 International Labour Organization's Convention (No. 182) concerning the elimination of the worst forms of child labour. However, many of these ratifications are yet to be given full effect through actual implementation of concrete measures. Globally, 150 million children aged 5–14 are estimated to be engaged in child labour. In conflict-affected poor countries, children are twice as likely to die before their fifth birthday compared to those in other poor countries. In industrialized countries, 4 per cent of children are physically abused each year and 10 per cent are neglected or psychologically abused.
In both developed and developing countries, children of the poor and the disadvantaged remain the least served. This exclusion persists against the evidence that the added value of early childhood care and education services are higher for them than for their more affluent counterparts, even when such services are of modest quality. While the problem is more intractable in developing countries, the developed world still does not equitably provide quality early childhood care and education services for all its children. In many European countries, children, mostly from low-income and immigrant families, do not have access to good quality early childhood care and education.
Orphan education
A lack of education during the early childhood years for orphans is a worldwide concern. Orphans are at higher risk of "missing out on schooling, living in households with less food security, and suffering from anxiety and depression." Education during these years has the potential to improve a child's "food and nutrition, health care, social welfare, and protection." This crisis is especially prevalent in sub-Saharan Africa, which has been heavily impacted by the AIDS epidemic. UNICEF reports that "13.3 million children (0–17 years) worldwide have lost one or both parents to AIDS. Nearly 12 million of these children live in sub-Saharan Africa." Government policies such as the Free Basic Education Policy have worked to provide education for orphan children in this area, but the quality and inclusiveness of this policy have drawn criticism.
Notable early childhood educators
Fred Rogers
Charles Eugene Beatty
Friedrich Fröbel
Elizabeth Harrison
David P. Weikart
Juan Sánchez Muliterno, President of The World Association of Early Childhood Educators
Maria Montessori
Erik Erikson
Chris Pascal, founding member of the European Early Childhood Education Research Association
See also
Baby video
Bright from the Start
Compensatory education
Head Start Program
Pretend play
Men in early childhood education
Montessori education
Playwork
Preschool Curriculum
Primary education
Reading
Reggio Emilia approach
Waldorf education
References
Citations
Sources
Neaum, S. (2013). Child development for early years students and practitioners. 2nd Edition. London: Sage Publications.
External links
National Institute for Early Education Research
National Education Association
Social stratification | Social stratification refers to a society's categorization of its people into groups based on socioeconomic factors like wealth, income, race, education, ethnicity, gender, occupation, social status, or derived power (social and political). It is a hierarchy within groups that ascribe them to different levels of privileges. As such, stratification is the relative social position of persons within a social group, category, geographic region, or social unit.
In modern Western societies, social stratification is defined in terms of three social classes: an upper class, a middle class, and a lower class; in turn, each class can be subdivided into an upper stratum, a middle stratum, and a lower stratum. Moreover, a social stratum can be formed upon the bases of kinship, clan, tribe, or caste, or all four.
The categorization of people by social stratum occurs most clearly in complex state-based, polycentric, or feudal societies, the latter being based upon socio-economic relations among classes of nobility and classes of peasants. Whether social stratification first appeared in hunter-gatherer, tribal, and band societies or whether it began with agriculture and large-scale means of social exchange remains a matter of debate in the social sciences. Determining the structures of social stratification arises from inequalities of status among persons; therefore, the degree of social inequality determines a person's social stratum. Generally, the greater the social complexity of a society, the more social stratification exists, by way of social differentiation.
Stratification can yield various consequences. For instance, the stratification of neighborhoods based on spatial and racial factors can influence disparate access to mortgage credit.
Overview
Definition and usage
"Social stratification" is a concept used in the social sciences to describe the relative social position of persons in a given social group, category, geographical region or other social unit. It derives from the Latin strātum (plural 'strata'; parallel, horizontal layers) referring to a given society's categorization of its people into rankings of socioeconomic tiers based on factors like wealth, income, social status, occupation and power. In modern Western societies, stratification is often broadly classified into three major divisions of social class: upper class, middle class, and lower class. Each of these classes can be further subdivided into smaller classes (e.g. "upper middle"). Social strata may also be delineated on the basis of kinship ties or caste relations.
The concept of social stratification is often used and interpreted differently within specific theories. In sociology, for example, proponents of action theory have suggested that social stratification is commonly found in developed societies, wherein a dominance hierarchy may be necessary in order to maintain social order and provide a stable social structure. Conflict theories, such as Marxism, point to the inaccessibility of resources and lack of social mobility found in stratified societies. Many sociological theorists have criticized the fact that the working classes are often unlikely to advance socioeconomically while the wealthy tend to hold political power which they use to exploit the proletariat (laboring class). Talcott Parsons, an American sociologist, asserted that stability and social order are regulated, in part, by universal values. Such values are not identical with "consensus" but can indeed be an impetus for social conflict, as has been the case multiple times through history. Parsons never claimed that universal values, in and by themselves, "satisfied" the functional prerequisites of a society. Indeed, the constitution of society represents a much more complicated codification of emerging historical factors. Theorists such as Ralf Dahrendorf alternately note the tendency toward an enlarged middle class in modern Western societies due to the necessity of an educated workforce in technological economies. Various social and political perspectives concerning globalization, such as dependency theory, suggest that these effects are due to changes in the status of workers with respect to the third world.
Four underlying principles
Four principles are posited to underlie social stratification. First, social stratification is socially defined as a property of a society rather than individuals in that society. Second, social stratification is reproduced from generation to generation. Third, social stratification is universal (found in every society) but variable (differs across time and place). Fourth, social stratification involves not just quantitative inequality but qualitative beliefs and attitudes about social status.
Complexity
Although stratification is not limited to complex societies, all complex societies exhibit features of stratification. In any complex society, the total stock of valued goods is distributed unequally, wherein the most privileged individuals and families enjoy a disproportionate share of income, power, and other valued social resources. The term "stratification system" is sometimes used to refer to the complex social relationships and social structures that generate these observed inequalities. The key components of such systems are: (a) social-institutional processes that define certain types of goods as valuable and desirable, (b) the rules of allocation that distribute goods and resources across various positions in the division of labor (e.g., physician, farmer, 'housewife'), and (c) the social mobility processes that link individuals to positions and thereby generate unequal control over valued resources.
Social mobility
Social mobility is the movement of individuals, social groups or categories of people between the layers of or within a stratification system. This movement can be intragenerational or intergenerational. Such mobility is sometimes used to classify different systems of social stratification. Open stratification systems are those that allow for mobility between strata, typically by placing value on the achieved status characteristics of individuals. Those societies having the highest levels of intragenerational mobility are considered to be the most open and malleable systems of stratification. Those systems in which there is little to no mobility, even on an intergenerational basis, are considered closed stratification systems. For example, in caste systems, all aspects of social status are ascribed, such that one's social position at birth persists throughout one's lifetime.
Karl Marx
In Marxist theory, the modern mode of production consists of two main economic parts: the base and the superstructure. The base encompasses the relations of production: employer–employee work conditions, the technical division of labour, and property relations. Social class, according to Marx, is determined by one's relationship to the means of production. There exist at least two classes in any class-based society: the owners of the means of production and those who sell their labor to the owners of the means of production. At times, Marx almost hints that the ruling classes seem to own the working class itself as they only have their own labor power ('wage labor') to offer the more powerful in order to survive. These relations fundamentally determine the ideas and philosophies of a society and additional classes may form as part of the superstructure. Through the ideology of the ruling class—throughout much of history, the land-owning aristocracy—false consciousness is promoted both through political and non-political institutions but also through the arts and other elements of culture. When the aristocracy falls, the bourgeoisie become the owners of the means of production in the capitalist system. Marx predicted the capitalist mode would eventually give way, through its own internal conflict, to revolutionary consciousness and the development of more egalitarian, more communist societies.
Marx also described two other classes, the petite bourgeoisie and the lumpenproletariat. The petite bourgeoisie is like a small business class that never really accumulates enough profit to become part of the bourgeoisie, or even challenge their status. The lumpenproletariat is the underclass, those with little to no social status. This includes prostitutes, street gangs, beggars, the homeless or other untouchables in a given society. Neither of these subclasses has much influence in Marx's two major classes, but it is helpful to know that Marx did recognize differences within the classes.
According to Marvin Harris and Tim Ingold, Lewis Henry Morgan's accounts of egalitarian hunter-gatherers formed part of Karl Marx' and Friedrich Engels' inspiration for communism. Morgan spoke of a situation in which people living in the same community pooled their efforts and shared the rewards of those efforts fairly equally. He called this "communism in living". But when Marx expanded on these ideas, he still emphasized an economically oriented culture, with property defining the fundamental relationships between people. Yet, issues of ownership and property are arguably less emphasized in hunter-gatherer societies. This, combined with the very different social and economic situations of hunter-gatherers may account for many of the difficulties encountered when implementing communism in industrialized states. As Ingold points out: "The notion of communism, removed from the context of domesticity and harnessed to support a project of social engineering for large-scale, industrialized states with populations of millions, eventually came to mean something quite different from what Morgan had intended: namely, a principle of redistribution that would override all ties of a personal or familial nature, and cancel out their effects."
The counter-argument to Marxist conflict theory is the theory of structural functionalism, argued by Kingsley Davis and Wilbert Moore, which states that social inequality plays a vital role in the smooth operation of a society. The Davis–Moore hypothesis argues that a position does not bring power and prestige because it draws a high income; rather, it draws a high income because it is functionally important and the available personnel are for one reason or another scarce. Most high-income jobs are difficult and require a high level of education to perform, and their compensation is a motivator in society for people to strive to achieve more.
Max Weber
Max Weber was strongly influenced by Marx's ideas but rejected the possibility of effective communism, arguing that it would require an even greater level of detrimental social control and bureaucratization than capitalist society. Moreover, Weber criticized the dialectical presumption of a proletariat revolt, maintaining it to be unlikely. Instead, he develops a three-component theory of stratification and the concept of life chances. Weber held there are more class divisions than Marx suggested, taking different concepts from both functionalist and Marxist theories to create his own system. He emphasizes the difference between class, status and power, and treats these as separate but related sources of power, each with different effects on social action. Working half a century later than Marx, Weber claims there to be four main social classes: the upper class, the white collar workers, the petite bourgeoisie, and the manual working class.
Weber derives many of his key concepts on social stratification by examining the social structure of Germany. He notes that, contrary to Marx's theories, stratification is based on more than simple ownership of capital. Weber examines how many members of the aristocracy lacked economic wealth yet had strong political power. Many wealthy families lacked prestige and power, for example, because they were Jewish. Weber introduced three independent factors that form his theory of stratification hierarchy, namely class, status, and power:
Class: A person's economic position in a society, based on birth and individual achievement. Weber differs from Marx in that he does not see this as the supreme factor in stratification. Weber notes how corporate executives control firms they typically do not own; Marx would have placed these people in the proletariat despite their high incomes by virtue of the fact they sell their labor instead of owning capital.
Status: A person's prestige, social honor, or popularity in a society. Weber notes that political power is not rooted in capital value solely, but also in one's individual status. Poets or saints, for example, can have extensive influence on society despite few material resources.
Power: A person's ability to get their way despite the resistance of others, particularly in their ability to engage social change. For example, individuals in government jobs, such as an employee of the Federal Bureau of Investigation, or a member of the United States Congress, may hold little property or status but still wield considerable social power.
C. Wright Mills
C. Wright Mills, drawing from the theories of Vilfredo Pareto and Gaetano Mosca, contends that the imbalance of power in society derives from the complete absence of countervailing powers against corporate leaders of the power elite. Mills both incorporated and revised Marxist ideas. While he shared Marx's recognition of a dominant wealthy and powerful class, Mills believed that the source for that power lay not only in the economic realm but also in the political and military arenas. During the 1950s, Mills stated that hardly anyone knew about the power elite's existence, some individuals (including the elite themselves) denied the idea of such a group, and other people vaguely believed that a small formation of a powerful elite existed. "Some prominent individuals knew that Congress had permitted a handful of political leaders to make critical decisions about peace and war; and that two atomic bombs had been dropped on Japan in the name of the United States, but neither they nor anyone they knew had been consulted."
Mills explains that the power elite embody a privileged class whose members are able to recognize their high position within society. In order to maintain their highly exalted position within society, members of the power elite tend to marry one another, understand and accept one another, and also work together.[pp. 4–5] The most crucial aspect of the power elite's existence lies within the core of education. "Youthful upper-class members attend prominent preparatory schools, which not only open doors to such elite universities as Harvard, Yale, and Princeton but also to the universities' highly exclusive clubs. These memberships in turn pave the way to the prominent social clubs located in all major cities and serving as sites for important business contacts."[pp. 63–67] Examples of elite members who attended prestigious universities and were members of highly exclusive clubs can be seen in George W. Bush and John Kerry. Both Bush and Kerry were members of the Skull and Bones club while attending Yale University. This club has included some of the most powerful men of the twentieth century, all of whom are forbidden to tell others about the secrets of their exclusive club. Throughout the years, the Skull and Bones club has included presidents, cabinet officers, Supreme Court justices, spies, and captains of industry, and often their sons and daughters join the exclusive club, creating a social and political network like none seen before.
The upper class individuals who receive elite educations typically have the essential background and contacts to enter into the three branches of the power elite: The political leadership, the military circle, and the corporate elite.
The Political Leadership: Mills held that, prior to the end of World War II, leaders of corporations became more prominent within the political sphere along with a decline in central decision-making among professional politicians.
The Military Circle: During the 1950s–1960s, increasing concerns about warfare resulted in top military leaders and issues involving defense funding and military personnel training becoming a top priority within the United States. Most of the prominent politicians and corporate leaders have been strong proponents of military spending.
The Corporate Elite: Mills explains that during the 1950s, when the military emphasis was recognized, corporate leaders worked with prominent military officers who dominated the development of policies. Corporate leaders and high-ranking military officers were mutually supportive of each other.[pp. 274–276]
Mills shows that the power elite has an "inner core" made up of individuals who are able to move from one position of institutional power to another; for example, a prominent military officer who becomes a political adviser, or a powerful politician who becomes a corporate executive. These people have more knowledge and a greater breadth of interests than their colleagues. Prominent bankers and financiers, whom Mills considered "almost professional go-betweens of economic, political, and military affairs," are also members of the elite's inner core.[pp. 288–289]
Anthropological theories
Most if not all anthropologists dispute the "universal" nature of social stratification, holding that it is not the standard among all societies. John Gowdy (2006) writes, "Assumptions about human behaviour that members of market societies believe to be universal, that humans are naturally competitive and acquisitive, and that social stratification is natural, do not apply to many hunter-gatherer peoples. Non-stratified egalitarian or acephalous ("headless") societies exist which have little or no concept of social hierarchy, political or economic status, class, or even permanent leadership."
Kinship-orientation
Anthropologists identify egalitarian cultures as "kinship-oriented", because they appear to value social harmony more than wealth or status. These cultures are contrasted with economically oriented cultures (including states) in which status and material wealth are prized, and stratification, competition, and conflict are common. Kinship-oriented cultures actively work to prevent social hierarchies from developing because they believe that such stratification could lead to conflict and instability. Reciprocal altruism is one process by which this is accomplished.
A good example is given by Richard Borshay Lee in his account of the Khoisan, who practice "insulting the meat". Whenever a hunter makes a kill, he is ceaselessly teased and ridiculed (in a friendly, joking fashion) to prevent him from becoming too proud or egotistical. The meat itself is then distributed evenly among the entire social group, rather than kept by the hunter. The level of teasing is proportional to the size of the kill. Lee found this out when he purchased an entire cow as a gift for the group he was living with, and was teased for weeks afterward about it (since obtaining that much meat could be interpreted as showing off).
Another example is the Australian Aboriginals of Groote Eylandt and Bickerton Island, off the coast of Arnhem Land, who have arranged their entire society—spiritually and economically—around a kind of gift economy called renunciation. According to David H. Turner, in this arrangement, every person is expected to give everything of any resource they have to any other person who needs or lacks it at the time. This has the benefit of largely eliminating social problems like theft and relative poverty. However, misunderstandings obviously arise when attempting to reconcile Aboriginal renunciative economics with the competition/scarcity-oriented economics introduced to Australia by European colonists.
Variables in theory and research
The social status variables underlying social stratification are based in social perceptions and attitudes about various characteristics of persons and peoples. While many such variables cut across time and place, the relative weight placed on each variable and specific combinations of these variables will differ from place to place over time. One task of research is to identify accurate mathematical models that explain how these many variables combine to produce stratification in a given society. Grusky (2011) provides a good overview of the historical development of sociological theories of social stratification and a summary of contemporary theories and research in this field. While many of the variables that contribute to an understanding of social stratification have long been identified, models of these variables and their role in constituting social stratification are still an active topic of theory and research. In general, sociologists recognize that there are no "pure" economic variables, as social factors are integral to economic value. However, the variables posited to affect social stratification can be loosely divided into economic and other social factors.
Economic
Strictly quantitative economic variables are more useful for describing social stratification than for explaining how social stratification is constituted or maintained. Income is the most common variable used to describe stratification and associated economic inequality in a society. However, the distribution of individual or household accumulation of surplus and wealth tells us more about variation in individual well-being than income alone. Wealth variables can also more vividly illustrate salient variations in the well-being of groups in stratified societies. Gross Domestic Product (GDP), especially per capita GDP, is sometimes used to describe economic inequality and stratification at the international or global level.
Social
Social variables, both quantitative and qualitative, typically provide the most explanatory power in causal research regarding social stratification, either as independent variables or as intervening variables. Three important social variables include gender, race, and ethnicity, which, at the least, have an intervening effect on social status and stratification in most places throughout the world. Additional variables include those that describe other ascribed and achieved characteristics such as occupation and skill levels, age, education level, education level of parents, and geographic area. Some of these variables may have both causal and intervening effects on social status and stratification. For example, absolute age may cause a low income if one is too young or too old to perform productive work. The social perception of age and its role in the workplace, which may lead to ageism, typically has an intervening effect on employment and income.
Social scientists are sometimes interested in quantifying the degree of economic stratification between different social categories, such as men and women, or workers with different levels of education. An index of stratification has recently been proposed by Zhou for this purpose.
Gender
Gender is one of the most pervasive and prevalent social characteristics that people use to make social distinctions between individuals. Gender distinctions are found in economic-, kinship- and caste-based stratification systems. Social role expectations often form along sex and gender lines. Entire societies may be classified by social scientists according to the rights and privileges afforded to men or women, especially those associated with ownership and inheritance of property. In patriarchal societies, such rights and privileges are normatively granted to men over women; in matriarchal societies, the opposite holds true. Sex- and gender-based division of labor is found throughout the history of most societies, and such divisions increased with the advent of industrialization. Sex-based wage discrimination exists in some societies such that men, typically, receive higher wages than women for the same type of work. Other differences in employment between men and women lead to an overall gender-based pay gap in many societies, where women as a category earn less than men due to the types of jobs which women are offered and take, as well as to differences in the number of hours worked by women. These and other gender-related values affect the distribution of income, wealth, and property in a given social order.
Race
Racism consists of both prejudice and discrimination based in social perceptions of observable biological differences between peoples. It often takes the form of social actions, practices or beliefs, or political systems in which different races are perceived to be ranked as inherently superior or inferior to each other, based on presumed shared inheritable traits, abilities, or qualities. In a given society, those who share racial characteristics socially perceived as undesirable are typically under-represented in positions of social power, i.e., they become a minority category in that society. Minority members in such a society are often subjected to discriminatory actions resulting from majority policies, including assimilation, exclusion, oppression, expulsion, and extermination. Overt racism usually feeds directly into a stratification system through its effect on social status. For example, members associated with a particular race may be assigned a slave status, a form of oppression in which the majority refuses to grant basic rights to a minority that are granted to other members of the society. More covert racism, such as that which many scholars posit is practiced in more contemporary societies, is socially hidden and less easily detectable. Covert racism often feeds into stratification systems as an intervening variable affecting income, educational opportunities, and housing. Both overt and covert racism can take the form of structural inequality in a society in which racism has become institutionalized.
Ethnicity
Ethnic prejudice and discrimination operate much the same as do racial prejudice and discrimination in society. In fact, only recently have scholars begun to differentiate race and ethnicity; historically, the two were considered to be identical or closely related. With the scientific development of genetics and the human genome as fields of study, most scholars now recognize that race is socially defined on the basis of biologically determined characteristics that can be observed within a society, while ethnicity is defined on the basis of culturally learned behavior. Ethnic identification can include shared cultural heritage such as language and dialect, symbolic systems, religion, mythology and cuisine. As with race, ethnic categories of persons may be socially defined as minority categories whose members are under-represented in positions of social power. As such, ethnic categories of persons can be subject to the same types of majority policies. Whether ethnicity feeds into a stratification system as a direct, causal factor or as an intervening variable may depend on the level of ethnocentrism within each of the various ethnic populations in a society, the amount of conflict over scarce resources, and the relative social power held within each ethnic category.
Global stratification
Globalizing forces lead to rapid international integration arising from the interchange of world views, products, ideas, and other aspects of culture. Advances in transportation and telecommunications infrastructure, from the telegraph to its modern successor, the Internet, are major factors in globalization, generating further interdependence of economic and cultural activities.
Just as one can see a stratified class system within a nation, looking at the world economy one can see class positions in the unequal distribution of capital and other resources between nations. Rather than having separate national economies, nations are considered as participating in a single world economy. According to world-systems and dependency theories, the world economy manifests a global division of labor with three overarching classes: core countries, semi-periphery countries, and periphery countries. Core nations primarily own and control the major means of production in the world, perform the higher-level production tasks, and provide international financial services. Periphery nations own very little of the world's means of production (even when factories are located in periphery nations) and provide low- to non-skilled labor. Semi-peripheral nations are midway between the core and periphery; they tend to be countries moving towards industrialization and more diversified economies.
Core nations receive the greatest share of surplus production, and periphery nations receive the least. Furthermore, core nations are usually able to purchase raw materials and other goods from noncore nations at low prices, while demanding higher prices for their exports to noncore nations. A global workforce employed through a system of global labor arbitrage ensures that companies in core countries can utilize the cheapest semi- and non-skilled labor for production.
Today we have the means to gather and analyze data from economies across the globe. Although many societies worldwide have made great strides toward more equality between differing geographic regions, in terms of the standard of living and life chances afforded to their peoples, we still find large gaps between the wealthiest and the poorest within a nation and between the wealthiest and poorest nations of the world. A January 2014 Oxfam report indicates that the 85 wealthiest individuals in the world have a combined wealth equal to that of the bottom 50% of the world's population, or about 3.5 billion people. By contrast, for 2012, the World Bank reports that 21 percent of people worldwide, around 1.5 billion, live in extreme poverty, at or below $1.25 a day. Zygmunt Bauman has provocatively observed that the rise of the rich is linked to their capacity to lead highly mobile lives: "Mobility climbs to the rank of the uppermost among coveted values—and the freedom to move, perpetually a scarce and unequally distributed commodity, fast becomes the main stratifying factor of our late modern or postmodern time."
See also
Age stratification
Caste system
Class stratification
Cultural hegemony
Dominance hierarchy
Egalitarianism
Elite theory
Elitism
Gini coefficient
Globalization
Intersectionality
Marxism
Microinequity
Rankism
Religious stratification
Social class
Social inequality
Socioeconomic status
Social justice
Systems of social stratification
The Power Elite
Social conditioning
Social conditioning is the sociological process of training individuals in a society to respond in a manner generally approved by the society in general and peer groups within society. The concept is stronger than that of socialization, which is the process of inheriting norms, customs and ideologies. Manifestations of social conditioning are vast, but they are generally categorized as social patterns and social structures including nationalism, education, employment, entertainment, popular culture, religion, spirituality and family life. The social structure in which an individual finds him or herself influences and can determine their social actions and responses.
Social conditioning represents the environment and personal experience in the nature and nurture debate. Society in general and peer groups within society set the norms which shape the behavior of actors within the social system. Although society shapes individuals, it was individuals who made society to begin with, and society in turn shapes and influences them. Émile Durkheim, who played an important role in the theory of social facts, explained how what was once a mere idea (in this case, society) has become a thing that effectively controls and dictates our behavior.
Socialization
Social conditioning is directly related to the particular culture that one is involved in. In You May Ask Yourself, Dalton Conley, a professor of sociology at New York University, states that "culture affects us. It's transmitted to us through different processes, with socialization—our internalization of society's values, beliefs and norms—being the main one." The particular manner of influence that one is exposed to is shaped by the herd that he or she belongs to. Social conditioning bases its principles on the natural need for an animal to be part of a pack.
Herd Instinct
Sigmund Freud, known as the father of psychoanalysis, recorded his observations of group dynamics in Group Psychology and the Analysis of the Ego. Referring to Wilfred Trotter's notion of the herd instinct, Freud describes how the group conditions its members: "opposition to the herd is as good as separation from it, and is therefore anxiously avoided". Such fear causes individual members, and even leaders, of a particular group to go along with the group's decisions in accordance with its culture. On a micro scale, the individual is conditioned to partake in the social norms of the group even if they contradict his or her personal moral code. The consequence of protest may be isolation, which, according to Freud, is one of the greatest punishments that can be inflicted on an individual, as it results in the inability of an individual to act on his or her "instinctual impulses". These instincts, according to Freud, are the motives behind the actions that the individual may take. He further states that "we thus have an impression of a state in which an individual's private emotional impulses and intellectual acts are too weak to come to anything by themselves and are entirely dependent for this on being reinforced by being repeated in a similar way in the other members of the group". Out of fear of isolation, and to secure the expression of instinctual impulses, there may be little protest from individual members as the group continues to condition them.
Propaganda
Edward Bernays, Freud's nephew and the father of propaganda and public relations, used many of his uncle's theories to create new methods in marketing. In Propaganda, he wrote: "If we understand the mechanism and motives of the group mind, it is now possible to control and regiment the masses according to our will without them knowing it". He used the herd theory to build the field of public relations, thus conditioning the public to need particular goods from certain manufacturers. In the same publication he stated, "A single factory, potentially capable of supplying a whole continent with its particular product, cannot afford to wait until the public asks for its product; it must maintain constant touch, through advertising and propaganda, with the vast public in order to assure itself the continuous demand which alone will make its costly plant profitable." His theories and applications of social conditioning continue throughout his work.
Bernays and the elite
Bernays continued the application of his work by describing the methods by which a minority elite uses social conditioning to assert its dominance and willpower. In You May Ask Yourself, Dalton Conley describes this idea with the term hegemony. He states that the term "refers to a historical process in which a dominant group exercises 'moral and intellectual leadership' throughout society by winning the voluntary 'consent' of popular masses." Bernays believed that this was a functionalist approach, stating: "vast numbers of human beings must cooperate in this manner if they are to live together as a smoothly functioning society ...In almost every act of our daily lives, whether in the sphere of politics or business, in our social conduct or our ethical thinking, we are dominated by the relatively small number of persons...who understand the mental processes and social patterns of the masses." Such influence is made possible by persistent repetition. Wilbert E. Moore, a former Princeton University sociology professor, states in Social Change that "the persistence of patterns gives order and constancy to recurrent events. In terms of behavior, many elements of persistence are more nearly cyclical, the near repetition of sequences of action over various time periods." He continues: "role structures (and thus norms) grow out of the need for predictability". While he does state that there are several reasons for group formation (spontaneous, deliberate, and coercive), the group usually winds up "repeating sequences" and then, according to Freud and Bernays, contributes to the socialization of possibly new members.
Classical conditioning – Ivan Pavlov and behaviorism
Such repetition contributes to basic social conditioning. Ivan Pavlov demonstrated this with his famous conditioned-stimulus experiments. In Pavlov's dog experiment, the research showed that repeated exposure to a particular stimulus results in a specific behavior being repeated. According to Mark Bouton of the University of Vermont, the strength of such repetition and influence can be seen in operant conditioning, where, depending on the reinforcement and punishment of a particular behavior, a response is conditioned.
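The text describes conditioning only qualitatively. A standard formal model of classical conditioning, not drawn from the sources cited here, is the Rescorla-Wagner rule, which updates associative strength V on each pairing as dV = alpha * beta * (lambda - V); the sketch below uses arbitrary parameter values purely for illustration.

```python
def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0):
    """Associative strength after repeated pairings, per the
    Rescorla-Wagner learning rule: dV = alpha * beta * (lam - V)."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * beta * (lam - v)  # each pairing closes part of the gap
        history.append(round(v, 3))
    return history

# Strength rises toward the asymptote lam with repeated pairings.
print(rescorla_wagner(5))  # [0.3, 0.51, 0.657, 0.76, 0.832]
```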
Methods of social conditioning – media
According to Ashley Lutz, an editor at Business Insider, 90% of the media in 2011 was owned by merely six companies. This limits exposure to information, or at least to the range of perspectives on it. The limited exposure to perspectives on information strengthens particular forms of social conditioning. Through repetition of a particular perspective on an ideal, the view is reinforced in the audience and becomes a social norm. This contributes to the formation of a reflection of the culture in media. Conley states that "culture is a projection of social structures and relationships into the public sphere, a screen onto which the film of the underlying reality or social structure of our society is shown". Such cyclical repetition creates a method of socialization and a manner in which society further molds its current or new members into the culture.
Labeling theory
Social control and stigmatization (SCS)
Conley states that "individuals subconsciously notice how others see or label them, and their reactions to these labels over time form the basis of their self-identity. It is only through the social process of labeling that we create deviance by assigning shared meanings to acts." Social conditioning is formed by the creation of 'good' and 'bad' behaviors: persistent reinforcement and the use of operant conditioning influence individuals and groups to develop particular behaviors and ideals. In "A Differential Association—Reinforcement Theory of Criminal Behavior", from Criminological Theory Readings and Retrospectives, social norms and deviance in a particular group are described as follows: "We often infer what the norms of a group are by observing reaction to behavior, i.e., the sanctions applied to, or reinforcement and punishment of, such behavior. We may also learn what a group's norms are through verbal and written statements. The individual group member also learns what is and is not acceptable behavior on the basis of verbal statements made by others, as well as through sanctions (i.e. the reinforcing or aversive stimuli) applied by others in response to his behavior and that of other norms violators."
A particular group conditions its members into certain behaviors. In Juvenile Delinquency and Urban Areas, the authors note that even illegal behaviors may be seen as positive and promoted within a particular group, because different social organizations have varying amounts of influence over particular members; in particular, as children age, their friends exert a greater influence than the family. Burgess and Akers further reinforce this point: "In terms of our analysis, the primary group would be seen to be the major source of an individual's social reinforcements. The bulk of behavioral training which the child receives occurs at a time when the trainers, usually the parents, possess a very powerful system of reinforcement. In fact, we might characterize a primary group as a generalized reinforcer (one associated with many reinforcers, conditioned as well as unconditioned). And, as we suggest above, as the child grows older, groups other than the family may come to control a majority of an individual's reinforcers, e.g. the adolescent peer group." Such theories are further backed up by Mead's theory of social development and are reinforced by stigmatization.
Mead's theory of social development
According to George Herbert Mead, one's identity is shaped by outside forces. While the self exists on its own at birth, the first interactions influence the development of one's identity. With the introduction of more and more groups, starting with significant others (e.g. family) and reference groups (e.g. friends), an individual develops his or her perception of self. As Conley states, individuals "...develop a sense of other, that is, someone or something outside of oneself". Finally, individuals interact with the generalized other, "which represents an internalized sense of the total expectations of others in a variety of settings—regardless of whether we've encountered those people or places before".
Stigma
"A stigma is a negative social label that not only changes others' behaviour towards a person but, also alters that person's self-concept and social identity." Once placed into such a category, an individual finds it nearly impossible to move out of that particular grouping. Such becomes his or her master status, overshadowing any other statuses. Such conditions the individual to continuously partake in the activities ascribed to the master status, good or bad.
See also
Brave New World
Operant conditioning
Peer pressure
Social theory
Political correctness
Feasibility study
A feasibility study is an assessment of the practicality of a project or system. A feasibility study aims to objectively and rationally uncover the strengths and weaknesses of an existing business or proposed venture, opportunities and threats present in the natural environment, the resources required to carry through, and ultimately the prospects for success. In its simplest terms, the two criteria to judge feasibility are cost required and value to be attained.
A well-designed feasibility study should provide a historical background of the business or project, a description of the product or service, accounting statements, details of the operations and management, marketing research and policies, financial data, legal requirements and tax obligations. Generally, feasibility studies precede technical development and project implementation. A feasibility study evaluates the project's potential for success; therefore, perceived objectivity is an important factor in the credibility of the study for potential investors and lending institutions. It must therefore be conducted with an objective, unbiased approach to provide information upon which decisions can be based.
Formal definition
A project feasibility study is a comprehensive report that examines in detail the five frames of analysis of a given project. It also takes into consideration its four Ps, its risks and POVs, and its constraints (calendar, costs, and norms of quality). The goal is to determine whether the project should go ahead, be redesigned, or else abandoned altogether.
The five frames of analysis are:
The frame of definition;
the frame of contextual risks;
the frame of potentiality;
the parametric frame;
the frame of dominant and contingency strategies.
The four Ps are traditionally defined as Plan, Processes, People, and Power. The risks are considered to be external to the project (e.g., weather conditions) and are divided in eight categories: (Plan) financial and organizational (e.g., government structure for a private project); (Processes) environmental and technological; (People) marketing and sociocultural; and (Power) legal and political. POVs are Points of Vulnerability: they differ from risks in the sense that they are internal to the project and can be controlled or else eliminated.
The constraints are the standard constraints of calendar, costs, and norms of quality, each of which can be objectively determined and measured along the entire project lifecycle. Depending on the project, portions of the study may suffice to produce a feasibility study; smaller projects, for example, may not require an exhaustive environmental assessment.
Common factors
TELOS is an acronym in project management used to define five areas of feasibility that determine whether a project should run or not; a minimal scoring sketch follows the list below.
T - Technical — Is the project technically possible?
E - Economic — Can the project be afforded? Will it increase profit?
L - Legal — Is the project legal?
O - Operational — How will the current operations support the change?
S - Scheduling — Can the project be done in time?
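The source treats TELOS as a qualitative checklist rather than a formula. Purely as an illustration, the sketch below records a hypothetical 0-5 score per area and flags weak spots; the scores and threshold are assumptions, not part of TELOS itself.

```python
# Hypothetical TELOS screening: score each area 0-5 and flag weak spots.
telos_scores = {
    "Technical":   4,  # Is the project technically possible?
    "Economic":    3,  # Can it be afforded? Will it increase profit?
    "Legal":       5,  # Is the project legal?
    "Operational": 2,  # Will current operations support the change?
    "Scheduling":  3,  # Can the project be done in time?
}

THRESHOLD = 3  # assumed minimum acceptable score per area
weak_areas = [area for area, score in telos_scores.items() if score < THRESHOLD]
print("Areas needing further study:", weak_areas or "none")
```

In practice such scores come from expert judgment; the point is simply that any area scoring poorly flags a feasibility risk worth deeper study.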
Technical feasibility
This assessment is based on an outline design of system requirements, to determine whether the company has the technical expertise to handle completion of the project. When writing a feasibility report, the following should be taken into consideration:
A brief description of the business, to assess the factors that could affect the study
The part of the business being examined
The human and economic factor
The possible solutions to the problem
At this level, the concern is whether the proposal is both technically and legally feasible (assuming moderate cost).
The technical feasibility assessment is focused on gaining an understanding of the present technical resources of the organization and their applicability to the expected needs of the proposed system. It is an evaluation of the hardware and software and of how they meet the needs of the proposed system.
Method of production
The selection among a number of methods to produce the same commodity should be undertaken first. Factors that make one method preferred over another in agricultural projects include the following:
Availability of inputs or raw materials and their quality and prices.
Availability of markets for outputs of each method and the expected prices for these outputs.
Various efficiency factors, such as the expected increase in output per additional unit of fertilizer, or the productivity of a specified crop per unit of input.
Production technique
After we determine the appropriate method of production of a commodity, it is necessary to look for the optimal technique to produce this commodity.
Project requirements
Once the method of production and its technique are determined, technical people have to determine the project's requirements during the investment and operating periods. These include:
Determination of the tools and equipment needed for the project, such as drinkers and feeders, pumps, or pipes.
Determination of the project's construction requirements, such as buildings, storage, and roads, in addition to internal designs for these facilities.
Determination of the project's requirements for skilled and unskilled labor, as well as managerial and financial staff.
Determination of the construction period, including the costs of design and consultation and the costs of construction and other equipment.
Determination of the minimum stock of inputs and the cash reserves needed to cover operating and contingency costs.
Project location
The most important factors that determine the selection of project location are the following:
Availability of land (proper acreage and reasonable costs).
The impact of the project on the environment and the approval of the concerned institutions for license.
The costs of transporting inputs and outputs to the project's location (i.e., the distance from the markets).
Availability of various services related to the project, such as extension services, veterinary services, water, electricity, and good roads.
Legal feasibility
It determines whether the proposed system conflicts with legal requirements (e.g., a data processing system must comply with local data protection regulations) and whether the proposed venture is acceptable under the laws of the land.
Operational feasibility study
Operational feasibility is the measure of how well a proposed system solves problems and takes advantage of the opportunities identified during scope definition and how it satisfies the requirements identified in the requirements analysis phase of system development.
The operational feasibility assessment focuses on the degree to which the proposed development project fits in with the existing business environment and objectives with regard to the development schedule, delivery date, corporate culture, and existing business processes.
To ensure success, desired operational outcomes must be imparted during design and development. These include such design-dependent parameters as reliability, maintainability, supportability, usability, producibility, disposability, sustainability, affordability, etc. These parameters are required to be considered at the early stages of the design if desired operational behaviours are to be realised. A system design and development requires appropriate and timely application of engineering and management efforts to meet the previously mentioned parameters. A system may serve its intended purpose most effectively when its technical and operating characteristics are engineered into the design. Therefore, operational feasibility is a critical aspect of systems engineering that must be integral to the early design phases.
Time feasibility
A time feasibility study takes into account the period the project will need to reach completion. A project will fail if it takes too long to be completed before it is useful. Typically this means estimating how long the system will take to develop and whether it can be completed in a given time period, using methods such as the payback period. Time feasibility is a measure of how reasonable the project timetable is: given our technical expertise, are the project deadlines reasonable? Some projects are initiated with specific deadlines, and it is necessary to determine whether the deadlines are mandatory or desirable.
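As a rough illustration of the payback-period method mentioned above, here is a minimal sketch; the investment and cash-flow figures are hypothetical.

```python
def payback_period(initial_investment, cash_flows):
    """Number of periods until cumulative cash flows recover the
    initial investment, interpolating within the final period.
    Returns None if the investment is never recovered."""
    cumulative = 0.0
    for period, cash_flow in enumerate(cash_flows, start=1):
        previous = cumulative
        cumulative += cash_flow
        if cumulative >= initial_investment:
            shortfall = initial_investment - previous
            return period - 1 + shortfall / cash_flow
    return None  # not recovered within the forecast horizon

# Hypothetical project: 100,000 invested, returning 30,000 per year.
print(payback_period(100_000, [30_000] * 5))  # about 3.33 years
```

A shorter payback suggests the project becomes useful sooner, which is exactly the concern time feasibility addresses.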
Other feasibility factors
Resource feasibility
This describes how much time is available to build the new system, when it can be built, whether it interferes with normal business operations, the type and amount of resources required, dependencies, and development procedures in light of the company's revenue prospectus.
Financial feasibility
In the case of a new project, financial viability can be judged on the following parameters:
Total estimated cost of the project
Financing of the project in terms of its capital structure, debt-to-equity ratio, and the promoter's share of total cost
Existing investment by the promoter in any other business
Projected cash flow and profitability
The financial viability of a project should provide the following information:
Full details of the assets to be financed and how liquid those assets are.
Rate of conversion to cash, i.e., liquidity (how easily the various assets can be converted to cash).
Project's funding potential and repayment terms.
Sensitivity of the repayment capability to the following factors (a minimal sketch follows this list):
Mild slowing of sales.
Acute reduction/slowing of sales.
Small increase in cost.
Large increase in cost.
Adverse economic conditions.
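As a minimal sketch of how the sensitivity checks above might be run, the code below recomputes net cash flow under assumed shocks to sales and costs; all figures, multipliers, and the debt-service threshold are illustrative assumptions.

```python
# Hypothetical base case: annual sales, operating costs, and debt service.
base_sales, base_costs, debt_service = 500_000.0, 350_000.0, 100_000.0

scenarios = {
    "base case":            (1.00, 1.00),
    "mild sales slowdown":  (0.95, 1.00),
    "acute sales slowdown": (0.80, 1.00),
    "small cost increase":  (1.00, 1.05),
    "large cost increase":  (1.00, 1.20),
}

for name, (sales_mult, cost_mult) in scenarios.items():
    net = base_sales * sales_mult - base_costs * cost_mult
    repays = net >= debt_service  # can the project still service its debt?
    print(f"{name:22s} net={net:>10,.0f}  repays debt: {repays}")
```

Running the scenarios side by side shows which shocks push repayment capability below the line, which is the question the list above is probing.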
In 1983 the first generation of the Computer Model for Feasibility Analysis and Reporting (COMFAR), a computation tool for financial analysis of investments, was released. Since then, this United Nations Industrial Development Organization (UNIDO) software has been developed to also support the economic appraisal of projects.
The COMFAR III Expert is intended as an aid in the analysis of investment projects. The main module of the program accepts financial and economic data, produces financial and economic statements and graphical displays and calculates measures of performance. Supplementary modules assist in the analytical process. Cost-benefit and value-added methods of economic analysis developed by UNIDO are included in the program and the methods of major international development institutions are accommodated. The program is applicable for the analysis of investment in new projects and expansion or rehabilitation of existing enterprises as, e.g., in the case of reprivatisation projects. For joint ventures, the financial perspective of each partner or class of shareholder can be developed. Analysis can be performed under a variety of assumptions concerning inflation, currency revaluation and price escalations.
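COMFAR's internal calculations are not specified here; as an illustration of the kind of performance measure such tools compute, the following is a minimal net-present-value (NPV) sketch with an assumed discount rate and hypothetical cash flows.

```python
def npv(rate, cash_flows):
    """Net present value of a cash-flow series, where cash_flows[0]
    occurs at time 0 (usually the negative initial investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: invest 100,000 now, receive 30,000/year for 5 years.
flows = [-100_000] + [30_000] * 5
print(round(npv(0.10, flows), 2))  # positive at a 10% rate, so viable on this measure
```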
Market research
The market research study is one of the most important sections of the feasibility study, as it examines the marketability of the product or service and convinces readers that there is a potential market for it. If a significant market for the product or service cannot be established, then there is no project. Typically, market studies will assess the potential sales of the product, absorption and market capture rates, and the project's timing. The feasibility study outputs the feasibility study report, a report detailing the evaluation criteria, the study findings, and the recommendations.
See also
Project appraisal
Environmental impact
Mining feasibility study
Proof of concept
SWOT analysis
Further reading
Matson, James. "Cooperative Feasibility Study Guide", United States Department of Agriculture, Rural Business-Cooperative Service. October 2000.
https://pilotandfeasibilitystudies.qmul.ac.uk/
Interpersonal relationship
In social psychology, an interpersonal relation (or interpersonal relationship) describes a social association, connection, or affiliation between two or more persons. It overlaps significantly with the concept of social relations, which are the fundamental unit of analysis within the social sciences. Relations vary in degrees of intimacy, self-disclosure, duration, reciprocity, and power distribution. The main themes or trends of the interpersonal relations are: family, kinship, friendship, love, marriage, business, employment, clubs, neighborhoods, ethical values, support and solidarity. Interpersonal relations may be regulated by law, custom, or mutual agreement, and form the basis of social groups and societies. They appear when people communicate or act with each other within specific social contexts, and they thrive on equitable and reciprocal compromises.
Interdisciplinary analysis of relationships draws heavily upon the other social sciences, including, but not limited to: anthropology, linguistics, sociology, economics, political science, communication, mathematics, social work, and cultural studies. This scientific analysis evolved during the 1990s and has become "relationship science" through the research of Ellen Berscheid and Elaine Hatfield. This interdisciplinary science attempts to provide evidence-based conclusions through the use of data analysis.
Types
Intimate relationships
Romantic relationships
Romantic relationships have been defined in countless ways, by writers, philosophers, religions, scientists, and in the modern day, relationship counselors. Two popular definitions of love are Sternberg's Triangular Theory of Love and Fisher's theory of love. Sternberg defines love in terms of intimacy, passion, and commitment, which he claims exist in varying levels in different romantic relationships. Fisher defines love as composed of three stages: attraction, romantic love, and attachment. Romantic relationships may exist between two people of any gender, or among a group of people, as in polyamory.
On the basis of openness, romantic relationships are of two types: open and closed. In a closed relationship, partners agree not to engage in romantic or sexual activity with anyone outside the relationship. In an open relationship, all partners remain committed to each other, but allow themselves and their partner to have relationships with others.
On the basis of the number of partners, relationships are of two types: monoamorous and polyamorous. A monoamorous relationship is between only two individuals; a polyamorous relationship involves three or more.
Romance
While many individuals recognize the single defining quality of a romantic relationship as the presence of love, it is impossible for romantic relationships to survive without the component of interpersonal communication. Within romantic relationships, love is therefore equally difficult to define. Hazan and Shaver define love, using Ainsworth's attachment theory, as comprising proximity, emotional support, self-exploration, and separation distress when parted from the loved one. Other components commonly agreed to be necessary for love are physical attraction, similarity, reciprocity, and self-disclosure.
Life stages
Early adolescent relationships are characterized by companionship, reciprocity, and sexual experiences. As emerging adults mature, they begin to develop attachment and caring qualities in their relationships, including love, bonding, security, and support for partners. Earlier relationships also tend to be shorter and exhibit greater involvement with social networks. Later relationships are often marked by shrinking social networks, as the couple dedicates more time to each other than to associates. Later relationships also tend to exhibit higher levels of commitment.
Most psychologists and relationship counselors predict a decline of intimacy and passion over time, replaced by a greater emphasis on companionate love (differing from adolescent companionate love in its caring, committed, and partner-focused qualities). However, couple studies have found no decline in intimacy or in the importance of sex, intimacy, and passionate love among those in longer or later-life relationships. Older people tend to be more satisfied in their relationships, but face greater barriers to entering new relationships than do younger or middle-aged people. Older women in particular face social, demographic, and personal barriers; men aged 65 and older are nearly twice as likely as women to be married, and widowers are nearly three times as likely to be dating 18 months following their partner's loss compared to widows.
Significant other
The term significant other gained popularity during the 1990s, reflecting the growing acceptance of 'non-heteronormative' relationships. It can be used to avoid making an assumption about the gender or relational status (e.g. married, cohabiting, civil union) of a person's intimate partner. Cohabiting relationships continue to rise, with many partners considering cohabitation to be nearly as serious as, or a substitute for, marriage. In particular, LGBTQ people often face unique challenges in establishing and maintaining intimate relationships. The strain of internalized discrimination, socially ingrained homophobia and transphobia, other forms of discrimination against LGBTQ+ people, and the social pressure of presenting themselves in line with socially acceptable gender norms can affect their health, quality of life, satisfaction, and emotions, both inside and outside their relationships. LGBTQ youth also lack the social support and peer connections enjoyed by hetero-normative young people. Nonetheless, comparative studies of homosexual and heterosexual couples have found few differences in relationship intensity, quality, satisfaction, or commitment.
Marital relationship
Although nontraditional relationships continue to rise, marriage still makes up the majority of relationships except among emerging adults. It is also still considered by many to occupy a place of greater importance among family and social structures.
Family relationships
Parent-child
In ancient times, parent-child relationships were often marked by fear, either of rebellion or abandonment, resulting in the strict filial roles in, for example, ancient Rome and China. Freud conceived of the Oedipal complex, the supposed obsession that young boys have towards their mothers and the accompanying fear and rivalry with their fathers, and the Electra complex, in which the young girl feels that her mother has castrated her and therefore becomes obsessed with her father. Freud's ideas influenced thought on parent-child relationships for decades.
Another early conception of parent-child relationships was that love only existed as a biological drive for survival and comfort on the child's part. In 1958, however, Harry Harlow's study "The Nature of Love", comparing the reactions of rhesus monkeys to wire surrogate "mothers" and cloth surrogate "mothers", demonstrated that the infants sought comfort and affection from the cloth mother regardless of which surrogate provided food, showing that attachment is not driven by feeding alone.
The study laid the groundwork for Mary Ainsworth's attachment theory, showing how the infants used their cloth "mothers" as a secure base from which to explore. In a series of studies using the strange situation, a scenario in which an infant is separated from then reunited with the parent, Ainsworth defined three styles of parent-child relationship.
Securely attached infants miss the parent, greet them happily upon return, and show normal exploration and lack of fear when the parent is present.
Insecure avoidant infants show little distress upon separation and ignore the caregiver when they return. They explore little when the parent is present. Infants also tend to be emotionally unavailable.
Insecure ambivalent infants are highly distressed by separation, but continue to be distressed upon the parent's return; these infants also explore little and display fear even when the parent is present.
Some psychologists have suggested a fourth attachment style, disorganized, so called because the infants' behavior appeared disorganized or disoriented.
Secure attachments are linked to better social and academic outcomes and greater moral internalization as research proposes the idea that parent-child relationships play a key role in the developing morality of young children. Secure attachments are also linked to less delinquency for children, and have been found to predict later relationship success.
For most of the late nineteenth through the twentieth century, the perception of adolescent-parent relationships was that of a time of upheaval. G. Stanley Hall popularized the "Sturm und Drang", or storm and stress, model of adolescence. Psychological research has painted a much tamer picture. Although adolescents are more risk-seeking, and emerging adults have higher suicide rates, they are largely less volatile and have much better relationships with their parents than the storm and stress model would suggest. Early adolescence often marks a decline in parent-child relationship quality, which then re-stabilizes through adolescence, and relationships are sometimes better in late adolescence than prior to its onset. With the increasing average age at marriage and more youths attending college and living with parents past their teens, the concept of a new period called emerging adulthood gained popularity. This is considered a period of uncertainty and experimentation between adolescence and adulthood. During this stage, interpersonal relationships are considered to be more self-focused, and relationships with parents may still be influential.
Siblings
Sibling relationships have a profound effect on social, psychological, emotional, and academic outcomes. Although proximity and contact usually decreases over time, sibling bonds continue to have effect throughout their lives. Sibling bonds are one of few enduring relationships humans may experience. Sibling relationships are affected by parent-child relationships, such that sibling relationships in childhood often reflect the positive or negative aspects of children's relationships with their parents.
Other examples of interpersonal relationship
Egalitarian and platonic friendship
Enemy
Frenemy — a person with whom an individual maintains a friendly interaction despite underlying conflict, possibly encompassing rivalry, mistrust, jealousy or competition
Neighbor
Familiar stranger
Official
Queerplatonic relationship
Business relationships are generally held to be distinct from personal relations; apart from excursions from the norm, they are a contrasting mode based on impersonal interest and rational rather than emotional concerns.
Business relationships
Partnership
Employer and employee
Contractor
Customer
Landlord and tenant
Co-worker
Ways that interpersonal relationships begin
Proximity:
Proximity increases the chance of repeated exposure to the same person. Long-term exposure develops familiarity, which is more likely to intensify liking or dislike.
Technological advance:
The Internet removes the barrier that long distance poses to communication. People can communicate with others who live far away from them through video calls or text. The Internet is thus a medium for people to be close to others who are not physically near them.
Similarity:
People prefer to make friends with others who are similar to them because their thoughts and feelings are more likely to be understood.
Stages
Interpersonal relationships are dynamic systems that change continuously during their existence. Like living organisms, relationships have a beginning, a lifespan, and an end. They tend to grow and improve gradually, as people get to know each other and become closer emotionally, or they gradually deteriorate as people drift apart, move on with their lives and form new relationships with others. One of the most influential models of relationship development was proposed by psychologist George Levinger. This model was formulated to describe heterosexual, adult romantic relationships, but it has been applied to other kinds of interpersonal relations as well. According to the model, the natural development of a relationship follows five stages:
Acquaintance and acquaintanceship – Becoming acquainted depends on previous relationships, physical proximity, first impressions, and a variety of other factors. If two people begin to like each other, continued interactions may lead to the next stage, but an acquaintance can continue indefinitely; such a relationship is also sometimes termed an association.
Buildup – During this stage, people begin to trust and care about each other. The need for intimacy, compatibility and such filtering agents as common background and goals will influence whether or not interaction continues.
Continuation – This stage follows a mutual commitment to quite a strong and close long-term friendship, romantic relationship, or even marriage. It is generally a long, relatively stable period. Nevertheless, continued growth and development will occur during this time. Mutual trust is important for sustaining the relationship.
Deterioration – Not all relationships deteriorate, but those that do tend to show signs of trouble. Boredom, resentment, and dissatisfaction may occur, and individuals may communicate less and avoid self-disclosure. Loss of trust and betrayals may take place as the downward spiral continues, eventually ending the relationship. (Alternately, the participants may find some way to resolve the problems and reestablish trust and belief in others.)
Ending – The final stage marks the end of the relationship, either by breakups, death or by spatial separation for quite some time and severing all existing ties of either friendship or romantic love.
Terminating a relationship
According to a 2007 systematic review of the economic literature on the factors associated with life satisfaction, stable and secure relationships are beneficial and, correspondingly, relationship dissolution is harmful.
The American Psychological Association has summarized the evidence on breakups. Breaking up can actually be a positive experience when the relationship did not expand the self and when the breakup leads to personal growth. They also recommend some ways to cope with the experience:
Purposefully focusing on the positive aspects of the breakup ("factors leading up to the break-up, the actual break-up, and the time right after the break-up")
Minimizing the negative emotions
Journaling the positive aspects of the breakup (e.g. "comfort, confidence, empowerment, energy, happiness, optimism, relief, satisfaction, thankfulness, and wisdom"). This exercise works best, although not exclusively, when the breakup is mutual.
Less time between a breakup and a subsequent relationship predicts higher self-esteem, attachment security, emotional stability, respect for the new partner, and greater well-being. Furthermore, rebound relationships do not last for a shorter time than regular relationships. 60% of people are friends with one or more exes. 60% of people have had an off-and-on relationship. 37% of cohabiting couples, and 23% of the married, have broken up and gotten back together with their existing partner.
Terminating a marital relationship implies divorce or annulment. One reason cited for divorce is infidelity. The determinants of unfaithfulness are debated by dating service providers, feminists, academics, and science communicators. According to Psychology Today, women's, rather than men's, level of commitment more strongly determines if a relationship will continue.
Pathological relationships
Research conducted in Iran and other countries has shown that conflicts between couples are common; in Iran, 92% of respondents reported that they had conflicts in their marriages. These conflicts arise for multiple reasons and can cause major problems for couples.
Abusive
Abusive relationships involve either maltreatment or violence such as physical abuse, physical neglect, sexual abuse, and emotional maltreatment. Abusive relationships within the family are very prevalent in the United States and usually involve women or children as victims. Common individual factors for abusers include low self-esteem, poor impulse control, external locus of control, drug use, alcohol abuse, and negative affectivity. There are also external factors such as stress, poverty, and loss which contribute to likelihood of abuse.
Codependent
Codependency initially focused on a codependent partner enabling substance abuse, but it has become more broadly defined to describe a dysfunctional relationship with extreme dependence on or preoccupation with another person. There are some who even refer to codependency as an addiction to the relationship. The focus of codependents tends to be on the emotional state, behavioral choices, thoughts, and beliefs of another person. Often those who are codependent neglect themselves in favor of taking care of others and have difficulty fully developing an identity of their own.
Narcissistic
Narcissists focus on themselves and often distance themselves from intimate relationships; the focus of narcissistic interpersonal relationships is to promote one's self-concept. Generally, narcissists show less empathy in relationships and view love pragmatically or as a game involving others' emotions.
Narcissism is usually associated with the personality disorder narcissistic personality disorder (NPD). In relationships, narcissists tend to affect the other person as they attempt to use them to enhance their self-esteem. Certain presentations of NPD can make a person incapable of sustaining an interpersonal relationship, owing to their being cunning, envious, and contemptuous.
Importance
Human beings are innately social and are shaped by their experiences with others. There are multiple perspectives to understand this inherent motivation to interact with others.
Need to belong
According to Maslow's hierarchy of needs, humans need to feel love (sexual/nonsexual) and acceptance from social groups (family, peer groups). In fact, the need to belong is so innately ingrained that it may be strong enough to overcome physiological and safety needs, such as children's attachment to abusive parents or staying in abusive romantic relationships. Such examples illustrate the extent to which the psychobiological drive to belong is entrenched.
Social exchange
Another way to appreciate the importance of relationships is in terms of a reward framework. This perspective suggests that individuals engage in relations that are rewarding in both tangible and intangible ways. The concept fits into a larger theory of social exchange. This theory is based on the idea that relationships develop as a result of cost–benefit analysis. Individuals seek out rewards in interactions with others and are willing to pay a cost for said rewards. In the best-case scenario, rewards will exceed costs, producing a net gain. This can lead to "shopping around" or constantly comparing alternatives to maximize the benefits or rewards while minimizing costs.
Relational self
Relationships are also important for their ability to help individuals develop a sense of self. The relational self is the part of an individual's self-concept that consists of the feelings and beliefs that one has regarding oneself that develops based on interactions with others. In other words, one's emotions and behaviors are shaped by prior relationships. Relational self theory posits that prior and existing relationships influence one's emotions and behaviors in interactions with new individuals, particularly those individuals that remind them of others in their life. Studies have shown that exposure to someone who resembles a significant other activates specific self-beliefs, changing how one thinks about oneself in the moment more so than exposure to someone who does not resemble one's significant other.
Power and dominance
Power is the ability to influence the behavior of other people. When two parties have or assert unequal levels of power, one is termed "dominant" and the other "submissive". Expressions of dominance can communicate an intention to assert or maintain dominance in a relationship. Being submissive can be beneficial because it saves time, limits emotional stress, and may avoid hostile actions such as withholding of resources, cessation of cooperation, termination of the relationship, maintaining a grudge, or even physical violence. Submission occurs in different degrees; for example, some employees may follow orders without question, whereas others might express disagreement but concede when pressed.
Groups of people can form a dominance hierarchy. For example, a hierarchical organization uses a command hierarchy for top-down management. This can reduce time wasted in conflict over unimportant decisions, prevent inconsistent decisions from harming the operations of the organization, maintain alignment of a large population of workers with the goals of the owners (which the workers might not personally share), and, if promotion is based on merit, help ensure that the people with the best expertise make important decisions. This contrasts with group decision-making and with systems that encourage decision-making and self-organization by front-line employees, who in some cases may have better information about customer needs or how to work efficiently. Dominance is only one aspect of organizational structure.
A power structure describes power and dominance relationships in a larger society. For example, a feudal society under a monarchy exhibits a strong dominance hierarchy in both economics and physical power, whereas dominance relationships in a society with democracy and capitalism are more complicated.
In business relationships, dominance is often associated with economic power. For example, a business may adopt a submissive attitude to customer preferences (stocking what customers want to buy) and complaints ("the customer is always right") in order to earn more money. A firm with monopoly power may be less responsive to customer complaints because it can afford to adopt a dominant position. In a business partnership a "silent partner" is one who adopts a submissive position in all aspects, but retains financial ownership and a share of the profits.
Two parties can be dominant in different areas. For example, in a friendship or romantic relationship, one person may have strong opinions about where to eat dinner, whereas the other has strong opinions about how to decorate a shared space. It could be beneficial for the party with weak preferences to be submissive in that area because it will not make them unhappy and avoids conflict with the party that would be unhappy.
The breadwinner model is associated with gender role assignments in which the male in a heterosexual marriage is dominant because he is responsible for economic provision.
Relationship satisfaction
Social exchange theory and Rusbult's investment model show that relationship satisfaction is based on three factors: rewards, costs, and comparison levels (Miller, 2012). Rewards refer to any aspects of the partner or relationship that are positive. Conversely, costs are the negative or unpleasant aspects of the partner or the relationship. The comparison level is what each partner expects of the relationship; it is influenced by past relationships and by the general relationship expectations taught by family and friends.
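The arithmetic of this model is simple enough to sketch in code. The following Python fragment is a minimal illustrative sketch, not an implementation drawn from Miller (2012) or any cited study; the function names, numeric scales, and example values are assumptions made purely for demonstration.

```python
# A minimal sketch of the exchange/investment-model arithmetic:
# outcome = rewards - costs; satisfaction = outcome - comparison level.
# Scales and example values are illustrative assumptions only.

def outcome(rewards: float, costs: float) -> float:
    """Net outcome of a relationship: positive aspects minus negative ones."""
    return rewards - costs

def satisfaction(rewards: float, costs: float, comparison_level: float) -> float:
    """How the net outcome compares with what the partner expects."""
    return outcome(rewards, costs) - comparison_level

if __name__ == "__main__":
    # The same relationship, evaluated against two different expectation levels:
    print(satisfaction(rewards=8.0, costs=3.0, comparison_level=4.0))  # 1.0: satisfied
    print(satisfaction(rewards=8.0, costs=3.0, comparison_level=7.0))  # -2.0: dissatisfied
```

On this reading, the same objective relationship can satisfy one partner and disappoint another because their comparison levels differ, which is one way to see why costs and benefits are subjective, as discussed below for long-distance relationships.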
Individuals in long-distance relationships (LDRs) have rated their relationships as more satisfying than individuals in proximal relationships (PRs). However, Holt and Stone (1988) found that long-distance couples who were able to meet their partner at least once a month had satisfaction levels similar to those of unmarried couples who cohabited, and that relationship satisfaction was lower for members of LDRs who saw their partner less frequently than once a month. Other research found that LDR couples reported the same level of relationship satisfaction as couples in PRs, despite seeing each other on average only once every 23 days.
Social exchange theory and the investment model both theorize that relationships high in costs would be less satisfying than relationships low in costs. LDRs involve higher costs than PRs, so one would expect LDRs to be less satisfying, yet individuals in LDRs are often more satisfied with their relationships than individuals in PRs. This can be explained by unique aspects of LDRs, how the individuals use relationship maintenance behaviors, and the attachment styles of the individuals in the relationships. The costs and benefits of a relationship are therefore subjective to the individual, and people in LDRs tend to report lower costs and higher rewards in their relationships compared to PRs.
Theories and empirical research
Confucianism
Confucianism is a study and theory of relationships, especially within hierarchies. Social harmony—the central goal of Confucianism—results in part from every individual knowing their place in the social order and playing their part well. Particular duties arise from each person's particular situation in relation to others. The individual stands simultaneously in several different relationships with different people: as a junior in relation to parents and elders; and as a senior in relation to younger siblings, students, and others. Juniors are considered in Confucianism to owe their seniors reverence and seniors have duties of benevolence and concern toward juniors. A focus on mutuality is prevalent in East Asian cultures to this day.
Minding relationships
The mindfulness theory of relationships shows how closeness in relationships may be enhanced. Minding is the "reciprocal knowing process involving the nonstop, interrelated thoughts, feelings, and behaviors of persons in a relationship." Five components of "minding" include:
Knowing and being known: seeking to understand the partner
Making relationship-enhancing attributions for behaviors: giving the benefit of the doubt
Accepting and respecting: empathy and social skills
Maintaining reciprocity: active participation in relationship enhancement
Continuity in minding: persisting in mindfulness
In popular culture
Popular perceptions
Popular perceptions of intimate relationships are strongly influenced by movies and television. Common messages are that love is predestined, love at first sight is possible, and that love with the right person always succeeds. Those who consume the most romance-related media tend to believe in predestined romance and that those who are destined to be together implicitly understand each other. These beliefs, however, can lead to less communication and problem-solving as well as giving up on relationships more easily when conflict is encountered.
Social media
Social media has changed the face of interpersonal relationships. Romantic interpersonal relationships are no less impacted. For example, in the United States, Facebook has become an integral part of the dating process for emerging adults. Social media can have both positive and negative impacts on romantic relationships. For example, supportive social networks have been linked to more stable relationships. However, social media usage can also facilitate conflict, jealousy, and passive-aggressive behaviors such as spying on a partner. Aside from direct effects on the development, maintenance, and perception of romantic relationships, excessive social network usage is linked to jealousy and dissatisfaction in relationships.
A growing segment of the population is engaging in purely online dating, sometimes but not always moving towards traditional face-to-face interactions. These online relationships differ from face-to-face relationships; for example, self-disclosure may be of primary importance in developing an online relationship. Conflict management differs, since avoidance is easier and conflict resolution skills may not develop in the same way. Additionally, the definition of infidelity is both broadened and narrowed, since physical infidelity becomes easier to conceal but emotional infidelity (e.g. chatting with more than one online partner) becomes a more serious offense.
See also
I and Thou
Impact of prostitution on mental health
Interactionism
Interpersonal attraction
Interpersonal tie
Outline of relationships
Relational mobility
Relational models theory
Relationship status
Relationship forming
Social connection
Socionics
Relationship science
References
Further reading
External links | 0.762276 | 0.998527 | 0.761154 |
Phenotypic trait | A phenotypic trait, simply trait, or character state is a distinct variant of a phenotypic characteristic of an organism; it may be either inherited or determined environmentally, but typically occurs as a combination of the two. For example, having eye color is a character of an organism, while blue, brown and hazel versions of eye color are traits. The term trait is generally used in genetics, often to describe phenotypic expression of different combinations of alleles in different individual organisms within a single population, such as the famous purple vs. white flower coloration in Gregor Mendel's pea plants. By contrast, in systematics, the term character state is employed to describe features that represent fixed diagnostic differences among taxa, such as the absence of tails in great apes, relative to other primate groups.
Definition
A phenotypic trait is an obvious, observable, and measurable characteristic of an organism; it is the expression of genes in an observable way. An example of a phenotypic trait is a specific hair color or eye color. The underlying genes, which make up the genotype, determine the hair color, but the hair color observed is the phenotype. The phenotype depends on the genetic make-up of the organism, and is also influenced by the environmental conditions to which the organism is subjected across its ontogenetic development, including various epigenetic processes. Regardless of the degree of influence of genotype versus environment, the phenotype encompasses all of the characteristics of an organism, including traits at multiple levels of biological organization, ranging from behavior and life-history traits (e.g., litter size), through morphology (e.g., body height and composition), physiology (e.g., blood pressure), and cellular characteristics (e.g., membrane lipid composition, mitochondrial densities), to components of biochemical pathways and even messenger RNA.
Genetic origin of traits in diploid organisms
Different phenotypic traits are caused by different forms of genes, or alleles, which arise by mutation in a single individual and are passed on to successive generations.
Biochemistry of dominance and extensions to expression of traits
The biochemistry of the intermediate proteins determines how they interact in the cell. Therefore, biochemistry predicts how different combinations of alleles will produce varying traits.
Extended expression patterns seen in diploid organisms include facets of incomplete dominance, codominance, and multiple alleles. Incomplete dominance is the condition in which neither allele dominates the other in a heterozygote; instead, the phenotype of heterozygotes is intermediate, so the presence of each allele can be inferred from the phenotype. Codominance refers to the allelic relationship that occurs when two alleles are both expressed in the heterozygote, and both phenotypes are seen simultaneously. Multiple alleles refers to the situation in which there are more than two common alleles of a particular gene. Blood groups in humans are a classic example: the ABO blood group proteins determine blood type, and these are encoded by different alleles of a single locus.
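The ABO example is compact enough to express as a small lookup, shown below. This Python sketch is illustrative only; the allele labels ("IA", "IB", "i") follow common textbook notation rather than anything defined in this article.

```python
# Illustrative sketch: how multiple alleles and codominance determine
# ABO blood type. Three alleles exist at one locus: IA and IB (codominant)
# and i (recessive to both).

def abo_phenotype(allele1: str, allele2: str) -> str:
    """Return the ABO blood type produced by a pair of alleles."""
    genotype = {allele1, allele2}
    if genotype == {"IA", "IB"}:   # codominance: both alleles expressed
        return "AB"
    if "IA" in genotype:           # IA dominates i
        return "A"
    if "IB" in genotype:           # IB dominates i
        return "B"
    return "O"                     # homozygous recessive: i/i

if __name__ == "__main__":
    for pair in [("IA", "IB"), ("IA", "i"), ("IB", "i"), ("i", "i")]:
        print(pair, "->", abo_phenotype(*pair))
```

The IA/IB pairing shows codominance (both phenotypes appear at once), while pairing either IA or IB with i shows ordinary dominance over the recessive allele.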
Continuum versus categorical traits
Schizotypy is an example of a psychological phenotypic trait found in schizophrenia-spectrum disorders. Studies have shown that gender and age influence the expression of schizotypal traits. For instance, certain schizotypal traits may develop further during adolescence, whereas others stay the same during this period.
See also
Allometric engineering of traits
Character displacement
Eye color
Phene
Phenotype
Race (biology)
Skill
Citations
References
Lawrence, Eleanor (2005) Henderson's Dictionary of Biology. Pearson, Prentice Hall.
Classical genetics | 0.767849 | 0.991266 | 0.761143 |
Openness | Openness is an overarching concept that is characterized by an emphasis on transparency and collaboration. That is, openness refers to "accessibility of knowledge, technology and other resources; the transparency of action; the permeability of organisational structures; and the inclusiveness of participation". Openness can be said to be the opposite of closedness, central authority and secrecy.
Openness concept
Openness has been attributed to a wide array of approaches in very different contexts as outlined below. While there is no universally accepted definition of the overarching concept of openness, a 2017 comprehensive review concludes that:
Open terminology can refer to a higher-order concept (e.g. the "philosophy of openness"); the nature of resources (e.g. "open data"); the nature of processes (e.g. "open innovation"); or the effects on specific domains (e.g. "open education") [...] The principles typically used to characterize this higher-order concept are: access to information and other resources; participation in an inclusive and often collaborative manner; transparency of resources and actions; and democracy or "democratization", such as the breaking up of exclusionary structures.
In government
Open government is the governing doctrine which holds that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight.
Openness in government applies the idea of freedom of information to information held by authorities and holds that citizens should have the right to see the operations and activities of government at work. Since reliable information is requisite for accountability, freedom of access to information about the government supports government accountability and helps protect other necessary rights.
In creative works
Open content and free content both refer to creative works that lack restrictions on how people can use, modify, and distribute them. The terms derive from open source software and free software, similar concepts that refer specifically to software.
In education
Open education refers to institutional practices and programmatic initiatives that broaden access to the learning and training traditionally offered through formal education systems. By eliminating barriers to entry, open education aids freedom of information by increasing accessibility.
Advocates of open education hold that people from all social classes worldwide should have open access to high-quality education and resources. They work to eliminate obstacles such as high costs, outmoded materials, and legal instruments; these barriers impede collaboration among stakeholders. Cooperation is crucial to open education. The Open Education Consortium states: “Sharing is a fundamental attribute of education. Education means the sharing of knowledge, insights, and information with everybody. It is the foundation of new wisdom, ideas, talents, and understanding”. Open educational resources (OERs) are learning materials that educators can improve and modify with permission from their publishers or authors. Creators of OERs may include a variety of items such as lesson plans, presentation slides, lecture videos, podcasts, worksheets, maps, and images.
There are tools, such as Creative Commons licenses, that students can access and use at liberty; they are allowed to translate and amend these materials. Public school teachers in the USA can share resources they developed in compliance with government-authorized standards in education, such as the Common Core State Standards. Some teachers and school officials have recommended that OERs can help reduce expenses in the production and distribution of course materials for primary and secondary institutions. Certain projects, such as the OER Commons, serve as storage for open educational resources.
In science
Open science refers to the practice of allowing peer-reviewed research articles to be available online free of charge and free of most copyright and licensing restrictions. Benefits of this approach include: accelerated discovery and progress as researchers are free to use and build on the findings of others, giving back to the public as much research is paid for with public funds, and greater impact for one's work due to open access articles being accessible to a bigger audience.
In information technology
In open-source software, the user is given access to sources such as the software's source code. In open-source hardware, the user gets access to sources such as design documents and blueprints. Open data is data that can be freely used and shared by anyone.
In psychology
In psychology, openness to experience is one of the domains which are used to describe human personality in the Five Factor Model.
See also
Accessibility
Free association
Free content
Free software
Glasnost
Open source
Open access (publishing)
Open innovation
Open education
Open educational resources
Open-design movement
Open government
Open Knowledge Foundation
Open knowledge
Open-mindedness
Open text
Open gaming
Open patent
Open-source curriculum
Open-source governance
Open-source journalism
Open-source model
Open standard
Openness to experience
Secrecy: the opposite of openness
The Open Definition
Transparency: openness in a utilitarian view, economic openness, open economic or political data, degree of openness, etc.
References and notes
Transparency (behavior) | 0.780081 | 0.975682 | 0.761111 |
Training and development | Training and development involves improving the effectiveness of organizations and the individuals and teams within them. Training may be viewed as being related to immediate changes in effectiveness via organized instruction, while development is related to the progress of longer-term organizational and employee goals. While training and development technically have differing definitions, the terms are often used interchangeably. Training and development have historically been topics within adult education and applied psychology, but have within the last two decades become closely associated with human resources management, talent management, human resources development, instructional design, human factors, and knowledge management.
Skills training has taken on varying organizational forms across industrialized economies. Germany has an elaborate vocational training system, whereas the United States and the United Kingdom are considered to generally have weak ones.
History
Aspects of training and development have been linked to ancient civilizations around the world. Early training-related articles appeared in journals marketed to enslavers in the Antebellum South and training approaches and philosophies were discussed extensively by Booker T. Washington. Early academic publishing related to training included a 1918 article in the Journal of Applied Psychology, which explored an undergraduate curriculum designed for applied psychologists.
By the 1960s and 70s, the field began developing theories and conducting theory-based research since it was historically rooted in trial-and-error intervention research, and new training methods were developed, such as the use of computers, television, case studies, and role playing. The scope of training and development also expanded to include cross-cultural training, a focus on the development of the individual employee, and the use of new organization development literature to frame training programs.
The 1980s focused on how employees received and implemented training programs, and encouraged the collection of data for evaluation purposes, particularly management training programs. The development piece of training and development became increasingly popular in the 90s, with employees more frequently being influenced by the concept of lifelong learning. It was in this decade that research revealing the impact and importance of fostering a training and development-positive culture was first conducted.
The 21st century brought more research in topics such as team-training, such as cross-training, which emphasizes training in coworkers' responsibilities.
Training practice and methods
Training and development encompass three main activities: training, education, and development. Differing levels and types of development may be used depending on the roles of employees in an organization.
The "stakeholders" in training and development are categorized into several classes. The sponsors of training and development are senior managers, and line managers are responsible for coaching, resources, and performance. The clients of training and development are business planners, while the participants are those who undergo the processes. The facilitators are human resource management staff and the providers are specialists in the field. Each of these groups has its own agenda and motivations, which sometimes conflict with the others'.
Since the 2000s, training has become more trainee-focused, which allows those being trained more flexibility and active learning opportunities. These active learning techniques include exploratory/discovery learning, error management training, guided exploration, and mastery training. Typical projects in the field include executive and supervisory/management development, new employee orientation, professional skills training, technical/job training, customer-service training, sales-and-marketing training, and health-and-safety training. Training is particularly critical in high-reliability organizations, which rely on high safety standards to prevent catastrophic damage to employees, equipment, or the environment (e.g. nuclear power plants and operating rooms).
The instructional systems design approach (often referred to as the ADDIE model) is often used for designing learning programs and for instructional design, the process of designing, developing, and delivering learning content. There are five phases in the ADDIE model:
Needs assessment: problem identification, training needs analysis, determination of the audience, and identification of stakeholders' needs and required resources
Program design: mapping of learning intervention/implementation outline and evaluation methods
Program development: delivery method, production of learning outcomes, quality evaluation of learning outcome, development of communication strategy, required technology, and assessment and evaluation tools
Training delivery and implementation: participation in side-programs, training delivery, learning participation, and evaluation of business
Evaluation of training: formal evaluation, including the evaluation of learning and potential points of improvement
Many different training methods exist today, including both on- and off-the-job methods. Other training methods may include:
Apprenticeship training: training in which a worker entering the skilled trades is given thorough instruction and experience both on and off the job in the practical and theoretical aspects of the work
Co-operative programs and internship programs: training programs that combine practical, on-the-job experience with formal education, and are usually offered at colleges and universities
Classroom instruction: information is presented in lectures, demonstrations, films, and videotapes or through computer instruction
Self-directed learning: individuals work at their own pace during programmed instruction, which may include books, manuals, or computers that break down subject-matter content into highly-organized logical sequences that demand a continuous response on the trainee's part. It often includes the use of computer and/or online resources.
Audiovisual: methods used to teach the skills and procedures required for a number of jobs through audiovisual means
Simulation: used when it is not practical or safe to train people on the actual equipment or within the actual work environment
Training is important because it prepares employees for higher job responsibilities, shows employees they are valued, improves IT and computer processes, and tests the efficiency of new performance management systems. However, some believe training wastes time and money because, in certain cases, real-life experience may be better than education, and organizations want to spend less, not more.
Needs assessments
Needs assessments, especially when the training is being conducted on a large scale, are frequently conducted in order to gauge what needs to be trained, how it should be trained, and how extensively. Needs assessments in the training and development context often reveal employee and management-specific skills to develop (e.g. for new employees), organization-wide problems to address (e.g. performance issues), adaptations needed to suit changing environments (e.g. new technology), or employee development needs (e.g. career planning). The needs assessment can predict the degree of effectiveness of training and development programs and how closely the needs were met, the execution of the training (i.e. how effective the trainer was), and trainee characteristics (e.g. motivation, cognitive abilities). Training effectiveness is typically assessed at the individual or team level, with few studies investigating the impacts on organizations.
Principles
Aik and Tway (2006) estimated that only 20–30% of training given to employees is used within the following month. To mitigate the issue, they recommended some general principles to follow to increase employees' desire to take part in the program. These include:
improving self-efficacy, which increases the learner's personal belief that they can fully comprehend the teachings
maintaining a positive attitude, as an uncooperative attitude towards learning could hinder the individual's capability to grasp the knowledge being provided
increasing competence, which is the ability for an individual to make good decisions efficiently
providing external motivators, such as a reward for the completion of the training or an extrinsic goal to follow
Motivation
Motivation is an internal process that influences an employee's behavior and willingness to achieve organizational goals. Creating a motivational environment within an organization can help employees achieve their highest level of productivity, and can create an engaged workforce that enhances individual and organizational performance. The model for motivation is represented by motivators separated into two different categories:
Intrinsic factors, which represent the internal factors of an individual, such as achievement recognition, responsibility, opportunity for meaningful work, involvement in decision making, and importance within the organization
Extrinsic factors, which are factors external to the individual, such as job security, salary, benefits, work conditions, and vacations
Both intrinsic and extrinsic motivators associate with employee performance in the workplace. A company's techniques to motivate employees may change over time depending on the current dynamics of the workplace.
Feedback
Traditional constructive feedback, also known as weakness-based feedback, can often be viewed as malicious from the employees’ perspective. When interpreted negatively, employees lose motivation on the job, affecting their production level.
Reinforcement is another principle of employee training and development. Studies have shown that reinforcement directly influences employee learning, which is highly correlated with performance after training. Reinforcement-based training emphasizes the importance of communication between managers and trainees in the workplace. The more the training environment can be a positive, nurturing experience, the faster attendees are apt to learn.
Benefits
The benefits of the training and development of employees include:
increased productivity and performance in the workplace
uniformity of work processes
skills and team development
reduced supervision and wastage
a decrease in safety-related accidents
improved organizational structure, designs and morale
better knowledge of policies and organization's goals
improved customer valuation
enhancements in public service motivation among public employees
However, training and development may lead to adverse outcomes if it is not strategic and goal-oriented. Additionally, there is a lack of consensus on the long-term outcomes of training investments, and in the public sector, managers often hold conservative views about the effectiveness of training.
Barriers and access to training
Training and development are crucial to organizational performance, employee career advancement and engagement.
Disparities in training can be caused by several factors, including societal norms and cultural biases that significantly impact the distribution of training opportunities. Stereotypes and implicit biases can undermine the confidence and performance of minority groups to seek out training, affecting their career development.
The impact of excluding or limiting a person’s access to training and development opportunities can affect both the individual and the organization.
Disparities in training opportunities can adversely affect individuals from underrepresented groups, leading to slower career progression, reduced employee engagement, and limited professional growth. Individuals may experience lower self-esteem and decreased motivation due to perceived or actual lack of access to development opportunities. For example, if a leadership training program does not have minority representation, individuals may lack the confidence to “break the glass ceiling” and seek out the opportunity for themselves.
When training opportunities are not equitably distributed, organizations may have reduced diversity in leadership and decision-making, which may stifle innovation and hinder organizational performance. Failure to address these disparities can lead to higher turnover rates and lower employee morale.
Management teams that are not diverse can be self-replicating as senior leaders’ demographic characteristics significantly impact the types of programs, policies and practices implemented in the organisation – i.e. there are more likely to be diversity programs if the management team is also diverse.
To address these disparities, organizations can implement diversity policies, provide bias training, and establish mentorship programs to support underrepresented groups. These may include:
implementing inclusive policies for addressing disparities: organizations should establish diversity and inclusion programs that specifically target training and development opportunities for underrepresented groups. These should focus on opportunities for future managers at the bottom of the hierarchy, as advancement to lower-level and middle-level positions is crucial for promotion to upper-level management. Such policies can help ensure employees have equal access to career advancement resources and can strengthen mechanisms for reporting discrimination or advancement barriers. Some efforts to support diversity and inclusion commitments in workplaces may be enshrined in law, such as the New Zealand Public Service Act 2020.
Developing mentorship and sponsorship programs: these programs can support underrepresented groups by providing them with guidance, networking opportunities, and advocacy within the organisation. Creating supportive networks for minority and gender groups can provide safe spaces for people identifying as minorities to develop programs that are suited to them and to provide a united voice to report ongoing discrimination.
Using data to track and address disparities in training opportunities: this may include censuses or regular pulse surveys or records of learning that are linked to a person’s self-identified attributes.
Occupation
The Occupational Information Network cites training and development specialists as having a bright outlook, meaning that the occupation will grow rapidly or have several job openings in the next few years. Related professions include training and development managers, (chief) learning officers, industrial-organizational psychologists, and organization development consultants. Training and development specialists are equipped with the tools to conduct needs analyses, build training programs to suit the organization's needs by using various training techniques, create training materials, and execute and guide training programs.
See also
Adult education
Andragogy
Microtraining
References
Further reading
Thelen, Kathleen. 2004. How Institutions Evolve: The Political Economy of Skills in Germany, Britain, the United States, and Japan. Cambridge University Press.
Human resource management
Organizational performance management
Training
Learning
Applied psychology
Personal development | 0.766471 | 0.992956 | 0.761072 |
Educational psychologist | An educational psychologist is a psychologist whose differentiating functions may include diagnostic and psycho-educational assessment, psychological counseling in educational communities (students, teachers, parents, and academic authorities), community-type psycho-educational intervention, and mediation, coordination, and referral to other professionals, at all levels of the educational system. Many countries use this term to signify those who provide services to students, their teachers, and families, while other countries use this term to signify academic expertise in teaching Educational Psychology.
Specific facts
Psychology is a well-developed discipline that allows different specializations, including clinical and health psychology, work and organizational psychology, and educational psychology. What differentiates an educational psychologist from other psychologists or specialists is an academic triangle whose vertexes are represented by three categories: teachers, students, and curricula. The use of the plural in these three cases carries two meanings: the traditional or official one, and a more general one derived from our information and knowledge society. The plural also indicates that nowadays we can no longer consider the average student or teacher, or a closed curriculum, but rather the enormous variety found among students, teachers, and curricula. The triangle's vertexes are connected by two-directional arrows, allowing four-fold typologies instead of the traditional two-way relationships (e.g., teacher–student). In this way, we can find, in different educational contexts, groups of good teachers and good students (excellent teaching/learning processes and products), groups of good teachers but bad students, and groups of bad teachers but good students (producing in both cases lower levels of academic achievement). In addition, we can find groups of bad teachers and bad students (school failure).
This specific work of an educational psychologist takes place in different contexts: micro-, meso-, and macro-systems. Microsystems refer to family contexts, where the atmosphere, hidden curriculum, and expectations and behaviors of all family members determine, to a large extent, the educational development of each student. The mesosystem refers to the whole variety of contexts found in educational institutions, where different variables such as geographical location, institutional marketing, or the type of teachers and students can influence the academic results of students. The macrosystem has a much more general and global nature, leading us, for example, to consider the influence that different societies or countries have on final educational products. One illustrative example of this level is the analyses carried out on data gathered for the PISA reports. This approach would be the essence of educational psychology, versus school psychology, for many U.S. educational researchers and for Division 15 of the APA.
Specific functions
There are four specific functions that are the essence of educational psychology. These are evaluation, psychological counseling, communitarian interventions, and referral to other professionals.
Evaluation involves collecting information, in a valid and reliable way, about the three target groups of the triangle described above (in their respective contexts): teachers, students, and curricula (not to be confused with curricula vitae). The most noteworthy function is, without a doubt, formal (rather than informal) assessment. Evaluation is divided into at least two main types: diagnosis (detection of dysfunctions such as physical, sensory, and intellectual impairments, dyslexia, attention-deficit/hyperactivity disorder, and pervasive developmental disorders or autism spectrum disorders) and psycho-educational evaluation (detection of curriculum difficulties, poor school atmosphere, family problems, etc.). Evaluation implies detection and, thanks to this, prevention.
A second function, also very relevant, is psychological counseling. This must be directed to students, in their various dimensions (intellectual, but also social, affective, and professional); to parents, as 'paraprofessionals' who may implement programs, selected or developed by educational psychologists, to solve their child's problems; to teachers, who will be offered psycho-educational support to face psychological difficulties that may arise when implementing and adapting curricula to the diversity shown by students; and to academic authorities, who will be helped in their decision-making regarding teaching and administrative duties (providing necessary support for students with specific educational needs, decisions about promotion to the next level, and so on).
A third function is based on communitarian interventions, with three main facets: corrective, preventative, and optimizing interventions. If disruptive behavior occurs in particular moments and contexts, then a corrective intervention is required. If the aim is to reduce school violence, then tertiary preventive intervention programs are needed. If an early diagnosis of learning difficulties is carried out, then the psychologist has undertaken secondary prevention. If the aim is to use psycho-educational programs to prevent future school failure, then a primary preventative intervention program is put into practice. The complement to all of these interventions is a series of optimizing activities, meant for the academic, professional, social, family, and personal improvement of all agents in an educational community, especially learners.
A fourth function, or specific activity, is the referral of those with dysfunctions to other professionals, following a prior diagnostic evaluation, with the aim of coordinating future treatment implementation. This coordination takes place with parents, teachers, and other professionals, promoting collaboration among all educational agents in order to achieve the fastest and best resolution of the case. This second triangle represents the essential components of school psychology for some European researchers and for Division 16 of the APA.
Academic requirements
A specific doctoral degree (a master's degree in Scotland) is now generally required for the professional preparation of educational psychologists in the UK. Essential to this Doctorate in Educational Psychology is the main course, which prepares educational psychologists to carry out diagnostic and psycho-educational assessment, psychological counseling for educational communities, and all types of communitarian interventions (corrective, preventive, and optimizing). Trainees also complete external professional practice on placement in local authorities (where the specific coordination, evaluation, counseling, and intervention functions are put into practice), as well as a final thesis. Equally, there are a series of theoretical areas that, due to their relevance in teaching/learning contexts, should be included, such as classroom diversity, drug-dependency prevention, developmental disorders, learning difficulties, new technologies applied to educational contexts, and data analysis and interpretation. Taking all of this into account, educational psychologists should be able to meet adequately the demands found in different educational institutions.
The following qualifications are required: an undergraduate degree in psychology (or approved postgraduate conversion course which confers the BPS Graduate Basis for Registration) and a BPS accredited Doctorate in Educational Psychology (3 years), or, for Scotland only, an accredited master's degree in Educational Psychology. Whilst teaching experience is relevant, it is no longer an entry requirement. At least one year's full-time experience working with children in educational, childcare, or community settings is required, and for some courses, this may be two years' experience.
To use the title Educational Psychologist in the UK, one must be registered with the Health and Care Professions Council (HCPC), which involves completing a course (doctorate or master's) approved by the HCPC.
In the United States
In the most basic sense of education requirements in the United States, an educational psychologist needs a bachelor's degree, followed by a master's degree, commonly finishing with a PhD or a PsyD in educational psychology. Specifically in California, an educational psychologist candidate (commonly referred to as an LEP, or Licensed Educational Psychologist) must have a minimum of a master's degree in psychology or a field related to educational psychology. This degree must be coupled with a minimum of three years of experience, including two years as a credentialed school psychologist and one year of supervised professional experience in an accredited school psychology program. After completing these requirements, a candidate takes an LEP examination to determine whether the application will be approved. These requirements are accepted by the Board of Behavioral Sciences (BBS) and are considered the common standard. States may have varying standards, but the aforementioned ones are typical when working in a school setting. Another route is the research field, which involves many of the same standards without the direct link to a school setting. Those in research settings are typically employed by a university and conduct research based on their own and others' findings. They may also teach at the university in their respective field.
Handbooks, application forms, and board reviews can be found at various websites:
http://apadiv15.org/wp-content/uploads/2014/01/Division15Bylaws2012.pdf
http://www.bbs.ca.gov/pdf/forms/lep/lepapp.pdf
http://www.caspwebcasts.org/new/index.php?option=com_content&view=article&id=325&Itemid=140
Job availability/outlook and salary
The average salary of an educational psychologist varies depending on where the psychologist practices. In a school setting, the professional can expect to make around $68,000 a year; however, these professionals are commonly school psychologists, who have a different background than educational psychologists. An educational psychologist in the research and development field could expect to make around $84,000 per year. Both of these averages could be considered inflated; another source lists the average income of an educational psychologist at around $57,000 per year. However, most estimates sit in the $67,000-per-year range, making that lower figure appear conservative. The latest statistics released in 2010 by the Bureau of Labor Statistics place the median annual salary at $72,540, showing an increase over a four-year period, compared with the median household income of the United States, currently $51,000. Educational psychologists thus make approximately 40% more than the median American household income, making it an advantageous field of study.
The job outlook in the field of educational psychology is considered good. By national (US) estimates, growth in the field ranges from 11 to 15% between 2006 and 2022: in a report released in 2006, the rate of growth was listed as 15% from 2006 to 2016, while a separate report put the growth at a more modest 11% from 2012 to 2022. Among job outlook growth percentages of the time, educational psychologists had the highest in the psychology field, and theirs was also considered among the highest across all occupations at the time of the 2006 report's release.
References
External links
British Psychological Society
Division 15 of the American Psychological Association
Division 16 of the American Psychological Association
Journal of Educational Psychology
National Association of Principal Educational Psychologists
National Educational Psychological Service
Northern Arizona University Educational Psychology program
Standards for Educational and Psychological Testing | 0.779391 | 0.976445 | 0.761033 |
Thomas theorem | The Thomas theorem is a theory of sociology which was formulated in 1928 by William Isaac Thomas and Dorothy Swaine Thomas:
In other words, the interpretation of a situation causes the action. This interpretation is not objective. Actions are affected by subjective perceptions of situations. Whether there even is an objectively correct interpretation is not important for the purposes of helping guide individuals' behavior.
The Thomas theorem is not a theorem in the mathematical sense.
Definition of the situation
In 1923, W. I. Thomas stated more precisely that any definition of a situation would influence the present. In addition, after a series of definitions in which an individual is involved, such a definition would also "gradually [influence] a whole life-policy and the personality of the individual himself". Consequently, Thomas stressed societal problems such as intimacy, family, or education as fundamental to the role of the situation when detecting a social world "in which subjective impressions can be projected on to life and thereby become real to projectors".
The definition of the situation is a fundamental concept in symbolic interactionism. It involves a proposal upon the characteristics of a social situation (e.g. norms, values, authority, participants' roles), and seeks agreement from others in a way that can facilitate social cohesion and social action. Conflicts often involve disagreements over definitions of the situation in question. This definition may thus become an area contested between different stakeholders (or by an ego's sense of self-identity).
A definition of the situation is related to the idea of "framing" a situation. The construction, presentation, and maintenance of frames of interaction (i.e., social context and expectations), and identities (self-identities or group identities), are fundamental aspects of micro-level social interaction.
See also
Impression management
Linguistic relativity
Placebo
Pluralistic ignorance
Self-fulfilling prophecy
Sociology of knowledge
Tinkerbell effect
References
Further reading
Sociological theories
Cognitive biases | 0.772555 | 0.985073 | 0.761024 |
Scholarly method | The scholarly method or scholarship is the body of principles and practices used by scholars and academics to make their claims about their subjects of expertise as valid and trustworthy as possible, and to make them known to the scholarly public. It comprises the methods that systemically advance the teaching, research, and practice of a scholarly or academic field of study through rigorous inquiry. Scholarship is creative, can be documented, can be replicated or elaborated, and can be and is peer reviewed through various methods. The scholarly method includes the subcategories of the scientific method, with which scientists bolster their claims, and the historical method, with which historians verify their claims.
Methods
The historical method comprises the techniques and guidelines by which historians research primary sources and other evidence, and then write history. The question of the nature, and indeed the possibility, of sound historical method is raised in the philosophy of history, as a question of epistemology. History guidelines commonly used by historians in their work require external criticism, internal criticism, and synthesis.
The empirical method is generally taken to mean the collection of data on which to base a hypothesis or derive a conclusion in science. It is part of the scientific method, but is often mistakenly assumed to be synonymous with other methods. The empirical method is not sharply defined and is often contrasted with the precision of experiments, where data emerges from the systematic manipulation of variables. The experimental method investigates causal relationships among variables. An experiment is a cornerstone of the empirical approach to acquiring data about the world and is used in both natural sciences and social sciences. An experiment can be used to help solve practical problems and to support or negate theoretical assumptions.
The scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning. A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.
See also
Academia
Academic authorship
Academic publishing
Discipline (academia)
Doctor (title)
Ethics
Historical revisionism
History of scholarship
Manual of style
Professor
Source criticism
Urtext edition
Wissenschaft
References
Academia
Methodology | 0.768272 | 0.99055 | 0.761012 |
Scholasticide | Scholasticide, often used interchangeably with the terms educide and epistemicide, refers to the intended mass destruction of education in a specific place.
Educide has been used to describe the mass destruction in the Iraq War (2003–2011) and the Israeli invasion of Gaza (2023–present).
Terminology
The terms are used interchangeably, covering various forms of the deliberate mass destruction of educational infrastructure. The suffix -cide, Latin for "killing", makes a connection with genocide.
The term scholasticide, where "schola-" is Latin for school, was first used by Karma Nabulsi in January 2009 in relation to the destruction of Palestinian educational infrastructure during the December 2008 to January 2009 Israeli war against Gaza.
The term "educide" was first used in March 2011 by Hans-Christof von Sponeck, UN Humanitarian Coordinator for Iraq, in a speech concerning Iraq at the Ghent University Conference, with the prefix referring to "education".
The term epistemicide was coined by Boaventura de Sousa Santos in 2014 and describes the destruction of knowledge systems, where episteme means knowledge.
Epistemicide can be used in the context of a coloniser destroying the existing knowledge systems of the colonised in order to replace them with knowledge systems controlled by the coloniser.
Elements
Characteristics that are often mentioned as elements of educide include, but are not necessarily limited to:
a strategy of intentional and systematic destruction of existing education;
situations of extreme violence (war, invasion, conflict, genocide, etc.);
destruction of educational institutions;
mass killings of academics and students;
and the destruction of educational materials.
Genocide
Educide has been linked to genocide. The United Nations (UN) established what constitutes genocide in Article II of the Convention on the Prevention and Punishment of the Crime of Genocide: the intentional destruction, in whole or in part, of a national, ethnical, racial, or religious group.
Motives
Educide is inflicted intentionally by an aggressor on a certain place and/or people. There are several reasons why an actor decides to commit educide; motives include, for example, colonisation, occupation, or the annihilation of perceived threats.
When an actor wishes to impose power over a territory, this can go together with displacing or oppressing the native population and giving ruling power to the settlers or occupation forces. This process is often violent as the aggressor tries to suppress uprisings and resistance from the people living there. This suppression can happen via soft power, hard power, or both. Soft power is getting results not by coercion but by attraction, for example via payments, good affiliation, or education. Education plays a crucial role, as it reproduces ideas such as norms, and values of a society; identities and nationalism; and it determines how history is taught. Consequently, it establishes an idea of who is good and who is bad. The coloniser/occupier can use education institutions to control these ideas. It does so by taking over the educational infrastructure. In this process, the original infrastructure is often overruled and/or destroyed. The absence of the original educational infrastructure leads to the colonised/occupied having to mirror and adapt to the infrastructure that is present, that of the coloniser, and is thus (partially) under its influence and control. This can happen via hard power by coercing change and destroying the existing educational infrastructure, which leads to educide.
If an actor perceives that a certain group of people forms a threat to its stability, security, or power, it may try to weaken or destroy this group. In this process, the actor may perceive the educational infrastructure as a danger, since this is where knowledge that serves the group is developed. The actor can then decide to destroy the educational infrastructure. For an example, see the case study on Iraq below.
Impact
The destruction of the educational infrastructure of a place has long-term effects on its people. Possible impacts of educide include:
Inaccessibility of education;
Educational delays and disadvantages (e.g. higher illiteracy rate);
Underemployment: in the absence of education, people cannot reach their educational potential or obtain their diplomas, which can leave them in work below their full capabilities and less satisfying than it could have been;
Linguicide: if a certain language is no longer taught, the language may die because people no longer know how to read or write it, nor can they develop their oral skills to their full potential;
Brain drain: during educide, academics and students can be targeted and flee their country, leading to a brain drain as highly educated people leave;
Ethnic cleansing and/or genocide: removing the entire educational infrastructure can lead to a loss of collective memory and knowledge reproduction, and thus contribute to ethnic cleansing and/or genocide of a people and its identity;
Colonisation: by removing the existing educational infrastructure and replacing it with a new one, a coloniser can control the reproduction and access to knowledge, which are instrumental in colonising a territory.
International law
Educide is not named as a specific crime in international law, as genocide is. Nevertheless, other elements of international humanitarian law (IHL) seek to prevent the crimes committed during educide; IHL establishes, for example, the protection of schools and the protection of innocent civilians.
Cases
Iraq
The first case for which the term educide was used was Iraq. The claimed educide in Iraq happened over multiple episodes. Before the 1990s, Iraq's educational infrastructure was good and improving. During the 1990s, UN-imposed sanctions decreased the quality and accessibility of education by reducing the trade income that would otherwise have flowed to the educational system. The situation worsened further during the Iraq War (2003–2011) and the war in Iraq against terrorist groups such as Daesh (2013–2017).
Iraq War
During the Iraq War, the US aimed for a regime change in Iraq to fight the perceived threat of terror and weapons of mass destruction as part of the "war on terror" campaign, which is described by critics as an illegitimate invasion motivated by imperialism (see also: Rationale for the Iraq War; Legitimacy of the 2003 invasion of Iraq; Opposition to the Iraq War; Protests against the Iraq War; and Legality of the Iraq War). Changing the regime, by changing the political and economic status quo, was partially done via educide. The US dismantled the educational system, replaced it with a system dependent on British and American universities, and promoted "Western values," which were criticised for being Islamophobic. This led primarily to the Iraqi educational infrastructure being destroyed systematically and intentionally. Between 2003 and 2007, school attendance dropped by almost 70%, at least 280 academics were killed, and 30% of the total number of professors, doctors, and engineers left Iraq. Iraq's educational infrastructure faced many problems due to a lack of materials, a fear of bombings and kidnappings that prevented people from going to their educational institutions, and the flight of many professors from the country. Additionally, around 2007, many could not practise their professions due to missing certificates, while governmental officials sometimes lacked the education they claimed to have. The absence of education had a major impact on the Iraqi population, as more than 40% of the Iraqi people are aged 15 years or younger.
Daesh
Between 2013 and 2017, the educational infrastructure suffered again. Due to the war against Daesh (also known as "IS", "ISIS", or "ISIL"), the Iraqi government reduced assistance to 5.2 million children. As of 2023, 770,000 children are displaced. Between 2013 and 2017, in places under Daesh control, the curriculum was changed: classes such as history or literature were replaced with religious teachings. The change of curriculum resulted in parents taking their children out of school to prevent indoctrination. Girls were disadvantaged in their access to education, with a curriculum adapted based on gender and access to education only up to the age of 15. Girls also dropped out to marry young, as marriage could prevent them from being forcefully married to a Daesh fighter. Moreover, from 2013 to 2017, educational institutions were attacked; Refworld reports more than 100 attacks in which 300 people (students and staff) were injured. Additionally, there were targeted murders, kidnappings, and threats which harmed 60 students and more than 100 staff. Finally, the buildings of educational institutions, such as those of Mosul University, were used for military purposes.
The educide in Iraq, although occurring under different circumstances, was intended to change the status quo by replacing the existing educational infrastructure with a new one. In both cases, it led to significant destruction of education and of access to education.
Gaza, Palestine
The terms "educide," "scholasticide," and "epistemicide" have been used to describe Israeli repression of Palestinian educiational infrastructure. After the 2023 Hamas-led attack on Israel on October 7 (for more background information, see Blockade of the Gaza Strip and Gaza-Israel Conflict), Israel attacked Gaza. This attack has been ongoing ever since and developed into a war on Gaza as well as a possible case of genocide on Gaza. The war on Gaza has destroyed the entire infrastructure and thus forms a case of educide.
As a result of the war on Gaza, most educational institutions have been destroyed, including 80% of all schools in Gaza. Critics have claimed that Israel has systematically and intentionally destroyed all the universities in Gaza. Some of the educational buildings have been turned into military bases by Israel. In addition to the material infrastructure, Israel has targeted those connected to the educational infrastructure, such as students and academics. By April 2024, 5,479 students, 261 teachers, and 95 university professors had been killed, and 7,819 students and 756 teachers injured; the numbers have been increasing ever since. According to the Ministry of Education and Higher Education in Gaza, 625,000 students cannot access education. For further information, Scholars Against the War on Palestine has listed the acts that are part of the scholasticide in Gaza.
See also
Cultural genocide
References
Bibliography
Further reading
Scholasticide Definition by Scholars Against the War on Palestine
Scholars Against the War on Palestine Toolkit
2000s neologisms
Cultural genocide
Attacks on schools
Impacts of the Iraq War
Israeli invasion of the Gaza Strip
Development theory
Development theory is a collection of theories about how desirable change in society is best achieved. Such theories draw on a variety of social science disciplines and approaches. In this article, multiple theories are discussed, as are recent developments with regard to these theories. Depending on the theory under consideration, there are different explanations of the process of development and of its inequalities.
Modernization theory
Modernization theory is used to analyze the processes by which modernization in societies takes place. The theory looks at which aspects of countries are beneficial and which constitute obstacles for economic development. The idea is that development assistance targeted at those particular aspects can lead to modernization of 'traditional' or 'backward' societies. Scientists from various research disciplines have contributed to modernization theory.
Sociological and anthropological modernization theory
The earliest principles of modernization theory can be derived from the idea of progress, which stated that people can develop and change their society themselves. Marquis de Condorcet was involved in the origins of this theory. This theory also states that technological advancements and economic changes can lead to changes in moral and cultural values. The French sociologist Émile Durkheim stressed the interdependence of institutions in a society and the way in which they interact with cultural and social unity. His work The Division of Labor in Society was very influential. It described how social order is maintained in society and ways in which primitive societies can make the transition to more advanced societies.
Other scientists who have contributed to the development of modernization theory are: David Apter, who did research on the political system and history of democracy; Seymour Martin Lipset, who argued that economic development leads to social changes which tend to lead to democracy; David McClelland, who approached modernization from the psychological side with his motivations theory; and Talcott Parsons who used his pattern variables to compare backwardness to modernity.
Linear stages of growth model
The linear stages of growth model is an economic model heavily inspired by the Marshall Plan, which was used to revitalize Europe's economy after World War II. It assumes that economic growth can only be achieved by industrialization. Growth can be restricted by local institutions and social attitudes, especially if these aspects influence the savings rate and investments. The constraints impeding economic growth are thus considered by this model to be internal to society.
According to the linear stages of growth model, a correctly designed massive injection of capital coupled with intervention by the public sector would ultimately lead to industrialization and economic development of a developing nation.
Rostow's stages of growth model is the most well-known example of the linear stages of growth model. Walt W. Rostow identified five stages through which developing countries had to pass to reach an advanced economy status: (1) Traditional society, (2) Preconditions for take-off, (3) Take-off, (4) Drive to maturity, (5) Age of high mass consumption. He argued that economic development could be led by certain strong sectors; this is in contrast to, for instance, Marxism, which states that sectors should develop equally. According to Rostow's model, a country needed to follow some rules of development to reach the take-off: (1) The investment rate of a country needs to be increased to at least 10% of its GDP, (2) One or two manufacturing sectors with a high rate of growth need to be established, (3) An institutional, political and social framework has to exist or be created in order to promote the expansion of those sectors.
The Rostow model has serious flaws, the most important of which are: (1) The model assumes that development can be achieved through a basic sequence of stages which are the same for all countries, a doubtful assumption; (2) The model measures development solely by means of the increase of GDP per capita; (3) The model focuses on characteristics of development, but does not identify the causal factors which lead development to occur. As such, it neglects the social structures that have to be present to foster development.
Economic modernization theories such as Rostow's stages model have been heavily inspired by the Harrod-Domar model, which explains in a mathematical way the growth rate of a country in terms of the savings rate and the productivity of capital. Heavy state involvement has often been considered necessary for successful development in economic modernization theory; Paul Rosenstein-Rodan, Ragnar Nurkse and Kurt Mandelbaum argued that a big push model in infrastructure investment and planning was necessary for the stimulation of industrialization, and that the private sector would not be able to provide the resources for this on its own.
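As a minimal sketch of this relationship (the standard textbook reduction of the Harrod-Domar model, not its full dynamic statement), the growth rate can be written as

$$ g = \frac{s}{v} $$

where g is the growth rate of national output, s is the savings rate, and v is the capital–output ratio. Since 1/v is the productivity of capital, a higher savings rate or more productive capital directly implies faster growth, which is why the linear stages models place such weight on capital accumulation.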
Another influential theory of modernization is the dual-sector model by Arthur Lewis. In this model Lewis explained how the traditional stagnant rural sector is gradually replaced by a growing modern and dynamic manufacturing and service economy.
Because of the focus on the need for investments in capital, the Linear Stages of Growth Models are sometimes referred to as suffering from ‘capital fundamentalism’.
Criticism of modernization theory
Modernization theory views the traditions and pre-existing institutions of so-called "primitive" societies as obstacles to modern economic growth. Modernization which is forced from outside upon a society might induce violent and radical change, but according to modernization theorists it is generally worth this side effect. Critics point to traditional societies as being destroyed and slipping into a modern form of poverty without ever gaining the promised advantages of modernization.
Structuralism
Structuralism is a development theory which focuses on structural aspects which impede the economic growth of developing countries. The unit of analysis is the transformation of a country's economy from a mainly subsistence agriculture to a modern, urbanized manufacturing and service economy. Policy prescriptions resulting from structuralist thinking include major government intervention in the economy to fuel the industrial sector, known as import substitution industrialization (ISI). This structural transformation of the developing country is pursued in order to create an economy which in the end enjoys self-sustaining growth. This can only be reached by ending the reliance of the underdeveloped country on exports of primary goods (agricultural and mining products), and pursuing inward-oriented development by shielding the domestic economy from that of the developed economies. Trade with advanced economies is minimized through the erection of all kinds of trade barriers and an overvaluation of the domestic exchange rate; in this way the production of domestic substitutes of formerly imported industrial products is encouraged. The logic of the strategy rests on the infant industry argument, which states that young industries initially do not have the economies of scale and experience to be able to compete with foreign competitors and thus need to be protected until they are able to compete in the free market. The Prebisch–Singer hypothesis states that over time the terms of trade for commodities deteriorate compared to those for manufactured goods, because the income elasticity of demand of manufactured goods is greater than that of primary products. If true, this would also support the ISI strategy.
Structuralists argue that the only way Third World countries can develop is through action by the state. Third World countries have to push industrialization, reduce their dependency on trade with the First World, and increase trade among themselves.
The roots of structuralism lie in South America, and particularly Chile. In 1950, Raul Prebisch went to Chile to become the first director of the Economic Commission for Latin America. In Chile, he cooperated with Celso Furtado, Aníbal Pinto, Osvaldo Sunkel, and Dudley Seers, who all became influential structuralists.
Dependency theory
Dependency theory is essentially a follow-up to structuralist thinking, and shares many of its core ideas. Whereas structuralists did not consider that development would be possible at all unless a strategy of delinking and rigorous ISI was pursued, dependency thinking could allow development with external links with the developed parts of the globe. However, this kind of development is considered to be "dependent development", i.e., it does not have an internal domestic dynamic in the developing country and thus remains highly vulnerable to the economic vagaries of the world market. Dependency thinking starts from the notion that resources flow from the 'periphery' of poor and underdeveloped states to a 'core' of wealthy countries, which leads to accumulation of wealth in the rich states at the expense of the poor states. Contrary to modernization theory, dependency theory states that not all societies progress through similar stages of development. Periphery states have unique features, structures and institutions of their own and are considered weaker with regard to the world market economy, while the developed nations were never in this colonized position in the past. Dependency theorists argue that underdeveloped countries remain economically vulnerable unless they reduce their connections to the world market.
Dependency theory states that poor nations provide natural resources and cheap labor for developed nations, without which the developed nations could not have the standard of living which they enjoy. When underdeveloped countries try to remove the Core's influence, the developed countries hinder their attempts in order to keep control. This means that the poverty of developing nations is not the result of their disintegration from the world system, but of the way in which they are integrated into this system.
In addition to its structuralist roots, dependency theory has much overlap with Neo-Marxism and World Systems Theory, which is also reflected in the work of Immanuel Wallerstein, a famous dependency theorist. Wallerstein rejects the notion of a Third World, claiming that there is only one world which is connected by economic relations (World Systems Theory). He argues that this system inherently leads to a division of the world in core, semi-periphery and periphery. One of the results of expansion of the world-system is the commodification of things, like natural resources, labor and human relationships.
Basic needs
The basic needs model was introduced by the International Labour Organization in 1976, mainly in reaction to prevalent modernization- and structuralism-inspired development approaches, which were not achieving satisfactory results in terms of poverty alleviation and combating inequality in developing countries. It tried to define an absolute minimum of resources necessary for long-term physical well-being. The poverty line which follows from this is the amount of income needed to satisfy those basic needs. The approach has been applied in the sphere of development assistance, to determine what a society needs for subsistence, and for poor population groups to rise above the poverty line. Basic needs theory does not focus on investing in economically productive activities. Basic needs can be used as an indicator of the absolute minimum an individual needs to survive.
Proponents of basic needs have argued that the elimination of absolute poverty is a good way to make people active in society so that they can provide labor more easily and act as consumers and savers. There have also been many critics of the basic needs approach, who argue that it lacks theoretical rigour and practical precision, conflicts with growth promotion policies, and runs the risk of leaving developing countries in permanent turmoil.
Neoclassical theory
Neoclassical development theory has its origins in its predecessor: classical economics. Classical economics was developed in the 18th and 19th centuries and dealt with the value of products and the production factors on which it depends. Early contributors to this theory are Adam Smith and David Ricardo. Classical economists argued – as do the neoclassical ones – in favor of the free market, and against government intervention in those markets. The 'invisible hand' of Adam Smith makes sure that free trade will ultimately benefit all of society. John Maynard Keynes was a highly influential economist of the 20th century as well, although his General Theory of Employment, Interest, and Money (1936) broke with classical thinking.
Neoclassical development theory became influential towards the end of the 1970s, spurred by the election of Margaret Thatcher in the UK and Ronald Reagan in the USA. The World Bank also shifted from its Basic Needs approach to a neoclassical approach in 1980. From the beginning of the 1980s, neoclassical development theory was rolled out ever more widely.
Structural adjustment
One of the implications of neoclassical development theory for developing countries was the Structural Adjustment Programmes (SAPs) which the World Bank and the International Monetary Fund wanted them to adopt. Important aspects of those SAPs include:
Fiscal austerity (reduction in government spending)
Privatization (which should both raise money for governments and improve efficiency and financial performance of the firms involved)
Trade liberalization, currency devaluation and the abolition of marketing boards (to maximize the static comparative advantage the developing country has on the global market)
Retrenchment of the government and deregulation (in order to stimulate the free market)
These measures are more or less reflected in the themes identified by the Institute for International Economics as necessary for the recovery of Latin America from the economic and financial crises of the 1980s. These themes are known as the Washington Consensus, a term coined in 1989 by the economist John Williamson.
Recent trends
Post-development theory
Postdevelopment theory is a school of thought which questions the idea of national economic development altogether. According to postdevelopment scholars, the goal of improving living standards leans on arbitrary claims as to the desirability and possibility of that goal. Postdevelopment theory arose in the 1980s and 1990s.
According to postdevelopment theorists, the idea of development is just a 'mental structure' (Wolfgang Sachs) which has resulted in a hierarchy of developed and underdeveloped nations, of which the underdeveloped nations desire to be like developed nations. Development thinking has been dominated by the West and is very ethnocentric, according to Sachs. The Western lifestyle may neither be a realistic nor a desirable goal for the world's population, postdevelopment theorists argue. Development is seen as entailing a loss of a country's own culture, of people's perceptions of themselves, and of their modes of life. According to Majid Rahnema, another leading postdevelopment scholar, notions such as poverty are very culturally embedded and can differ a lot among cultures. The institutes which voice the concern over underdevelopment are very Western-oriented, and postdevelopment calls for a broader cultural involvement in development thinking.
Postdevelopment proposes a vision of society which removes itself from the ideas which currently dominate it. According to Arturo Escobar, postdevelopment is interested instead in local culture and knowledge, a critical view against established sciences and the promotion of local grassroots movements. Also, postdevelopment argues for structural change in order to reach solidarity, reciprocity, and a larger involvement of traditional knowledge.
Sustainable development
Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs (Brundtland Commission). Other definitions of sustainable development exist, but they all have to do with the carrying capacity of the earth and its natural systems and the challenges faced by humanity. Sustainable development can be broken up into environmental sustainability, economic sustainability and sociopolitical sustainability. The book The Limits to Growth, commissioned by the Club of Rome, gave huge momentum to the thinking about sustainability. Global warming issues are also problems which are emphasized by the sustainable development movement. This led to the 1997 Kyoto Protocol, with its plan to cap greenhouse-gas emissions.
Opponents of the implications of sustainable development often point to the environmental Kuznets curve. The idea behind this curve is that, as an economy grows, it shifts towards more capital- and knowledge-intensive production. This means that as an economy grows, its pollution output increases, but only until it reaches a particular threshold where production becomes less resource-intensive and more sustainable. This means that a pro-growth, not an anti-growth policy is needed to solve the environmental problem. The evidence for the environmental Kuznets curve is, however, quite weak. Also, empirically speaking, people tend to consume more products when their income increases. Those products may have been produced in a more environmentally friendly way, but on the whole the higher consumption negates this effect. There are people like Julian Simon, however, who argue that future technological developments will resolve future problems.
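As an illustration of the inverted-U logic (a common reduced-form specification in the empirical literature, offered here only as a sketch), pollution E is often modelled as a quadratic function of income y:

$$ E = \alpha + \beta_1 y - \beta_2 y^2, \qquad \beta_1, \beta_2 > 0 $$

so that emissions rise with income up to a turning point at y* = β1 / (2β2) and fall thereafter. The weak empirical evidence noted above concerns whether estimated turning points are robust and whether they fall within realistic income ranges.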
Human development theory
Human development theory is a theory which uses ideas from different origins, such as ecology, sustainable development, feminism and welfare economics. It wants to avoid normative politics and is focused on how social capital and instructional capital can be deployed to optimize the overall value of human capital in an economy.
Amartya Sen and Mahbub ul Haq are the most well-known human development theorists. The work of Sen is focused on capabilities: what people can do and be. It is these capabilities, rather than the income or goods that they receive (as in the Basic Needs approach), that determine their well-being. This core idea also underlies the construction of the Human Development Index, a human-focused measure of development pioneered by the UNDP in its Human Development Reports; this approach has become popular the world over, with indexes and reports published by individual countries, including the American Human Development Index and Report in the United States. The economic side of Sen's work can best be categorized under welfare economics, which evaluates the effects of economic policies on the well-being of people. Sen wrote the influential book Development as Freedom, which added an important ethical side to development economics.
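For reference, and reflecting the UNDP's post-2010 methodology rather than anything specific to Sen's writings, the Human Development Index combines three normalized dimension indices through a geometric mean:

$$ \mathrm{HDI} = \left( I_{\text{health}} \cdot I_{\text{education}} \cdot I_{\text{income}} \right)^{1/3} $$

where each index rescales its indicator (life expectancy, years of schooling, and the logarithm of income) to the interval [0, 1]. The geometric mean penalizes uneven development across the three dimensions more than a simple average would.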
See also
Development (disambiguation)
Ecological modernization theory
Economic development
International development
World-systems theory
Progress
Progressivism
Development-induced displacement
Manifest destiny
White mans burden
Civilizing mission
Christian mission
White savior
References
Further reading
M. P. Cowen and R. W. Shenton, Doctrines of Development, Routledge (1996).
Peter W. Preston, Development Theory: An Introduction to the Analysis of Complex Change, Wiley-Blackwell (1996).
Peter W. Preston, Rethinking Development, Routledge & Kegan Paul Books Ltd (1988).
Richard Peet with Elaine Hartwick, "Theories of Development", The Guilford Press (1999)
Walt Whitman Rostow (1959), The stages of economic growth. The Economic History Review, 12: 1–16.
Tourette, J. E. L. (1964), Technological change and equilibrium growth in the Harrod-Domar model. Kyklos, 17: 207–226.
Durkheim, Emile. The Division of Labor in Society. Trans. Lewis A. Coser. New York: Free Press, 1997, pp. 39, 60, 108.
John Rapley (2007), Understanding Development. Boulder, London: Lynne Rienner Publishers
Meadows et al. (1972), The Limits to Growth, Universe Books.
Hunt, D. (1989), Economic Theories of Development: An Analysis of Competing Paradigms. London: Harvester Wheatsheaf
Greig, A., D. Hulme and M. Turner (2007). "Challenging Global Inequality. Development Theory and Practice in the 21st century". Palgrave Macmillan, New York.
International trade theory
Sociological theories | 0.766852 | 0.992344 | 0.760981 |
Vocal pedagogy
Vocal pedagogy is the study of the art and science of voice instruction. It is used in the teaching of singing and assists in defining what singing is, how singing works, and how proper singing technique is accomplished.
Vocal pedagogy covers a broad range of aspects of singing, ranging from the physiological process of vocal production to the artistic aspects of interpretation of songs from different genres or historical eras. Typical areas of study include:
Human anatomy and physiology as it relates to the physical process of singing.
Breathing and air support for singing
Posture for singing
Phonation
Vocal resonation or voice projection
Diction, vowels and articulation
Vocal registration
Sostenuto and legato for singing
Other singing elements, such as range extension, tone quality, vibrato, coloratura
Vocal health and voice disorders related to singing
Vocal styles, such as learning to sing opera, belt, or art song
Phonetics
Voice classification
All of these different concepts are a part of developing proper vocal technique. Not all voice teachers have the same opinions on every topic of study, which causes variations in pedagogical approaches and vocal technique.
History
Within Western culture, the study of vocal pedagogy began in Ancient Greece. Scholars such as Alypius and Pythagoras studied and made observations on the art of singing. It is unclear, however, whether the Greeks ever developed a systematic approach to teaching singing as little writing on the subject survives today.
The first surviving record of a systematized approach to teaching singing was developed in the medieval monasteries of the Roman Catholic Church sometime near the beginning of the 13th century. As with other fields of study, the monasteries were the center of musical intellectual life during the medieval period and many men within the monasteries devoted their time to the study of music and the art of singing. Highly influential in the development of a vocal pedagogical system were the monks Johannes de Garlandia and Jerome of Moravia, who were the first to develop a concept of vocal registers. These men identified three registers: chest voice, throat voice, and head voice (pectoris, guttoris, and capitis). Their concept of head voice, however, is much more similar to the modern pedagogist's understanding of the falsetto register. Other concepts discussed in the monastic system included vocal resonance, voice classification, breath support, diction, and tone quality, to name a few. The ideas developed within the monastic system highly influenced the development of vocal pedagogy over the next several centuries, including the Bel Canto style of singing.
With the onset of the Renaissance in the 15th century, the study of singing began to move outside of the church. The courts of rich patrons, such as the Dukes of Burgundy who supported the Burgundian School and the Franco-Flemish School, became secular centers of study for singing and all other areas of musical study. The vocal pedagogical methods taught in these schools, however, were based on the concepts developed within the monastic system. Many of the teachers within these schools had their initial musical training from singing in church choirs as children. The church also remained at the forefront of musical composition at this time and remained highly influential in shaping musical tastes and practices both in and outside the church. It was the Catholic Church that first popularized the use of castrato singers in the 16th century, which ultimately led to the popularity of castrato voices in Baroque and Classical operas.
While the church maintained its dominance on intellectual and cultural life, there are individual examples of writers on voice pedagogy from this period who were from outside the church and who put forward new ways of thinking and talking about the art of singing, although they lacked the wider influence of the monastic writers. The physician and court singer Giovanni Camillo Maffei was the first writer on vocal pedagogy to incorporate knowledge of the physiology of the voice into a theory of singing in his treatise Discorso della voce e del modo d'apparare di cantar di garganta, and Scala naturale, overo Fantasia dolcissima, intorno alle cose occulte e desiderate nella filosofia (Venice, 1564).
It was not until the development of opera in the 17th century that vocal pedagogy began to break away from some of the established thinking of the monastic writers and develop deeper understandings of the physical process of singing and its relation to key concepts like vocal registration and vocal resonation. It was also during this time that noted voice teachers began to emerge. Giulio Caccini is an example of an important early Italian voice teacher. In the late 17th century, the bel canto method of singing began to develop in Italy. This style of singing had a huge impact on the development of opera and the development of vocal pedagogy during the Classical and Romantic periods. It was during this time that teachers and composers first began to identify singers by and write roles for more specific voice types. However, it was not until the 19th century that more clearly defined voice classification systems like the German Fach system emerged. Within these systems, more descriptive terms were used in classifying voices such as coloratura soprano and lyric soprano.
Voice teachers in the 19th century continued to train singers for careers in opera. Manuel Patricio Rodríguez García is often considered one of the most important voice teachers of the 19th century, and is credited with the development of the laryngoscope and the beginning of modern voice pedagogy.
The field of voice pedagogy became more fully developed in the middle of the 20th century. A few American voice teachers began to study the science, anatomy, and physiology of singing, especially Ralph Appelman at Indiana University, Oren Brown at the Washington University School of Medicine and later the Juilliard School, and William Vennard at the University of Southern California. This shift in approach to the study of singing led to the rejection of many of the assertions of the bel canto singing method, most particularly in the areas of vocal registration and vocal resonation. As a result, there are currently two predominating schools of thought among voice teachers today, those who maintain the historical positions of the bel canto method and those who choose to embrace more contemporary understandings based in current knowledge of human anatomy and physiology. There are also those teachers who borrow ideas from both perspectives, creating a hybrid of the two.
Appelman and Vennard were also part of a group of voice instructors who developed courses of study for beginning voice teachers, adding these scientific ideas to the standard exercises and empirical ways to improve vocal technique, and by 1980 the subject of voice pedagogy was beginning to be included in many college music degree programs for singers and vocal music educators.
More recent works by authors such as Richard Miller and Johan Sundberg have increased the general knowledge of voice teachers, and scientific and practical aspects of voice pedagogy continue to be studied and discussed by professionals. In addition, the creation of organisations such as the National Association of Teachers of Singing (now an international organization of Vocal Instructors) has enabled voice teachers to establish more of a consensus about their work, and has expanded the understanding of what singing teachers do.
Topics of study
Pedagogical philosophy
There are basically three major approaches to vocal pedagogy, all related to how the mechanistic and psychological controls are employed while singing. Some voice instructors advocate an extreme mechanistic approach that holds that singing is largely a matter of getting the right physical parts in the right places at the right time, and that correcting vocal faults is accomplished by calling direct attention to the parts which are not working well. At the other extreme is the school of thought that believes attention should never be directed to any part of the vocal mechanism – that singing is a matter of producing the right mental images of the desired tone, and that correcting vocal faults is achieved by learning to think the right thoughts and by releasing the emotions through interpretation of the music. Most voice teachers, however, believe that the truth lies somewhere between the two extremes and adopt a composite of the two approaches.
The nature of vocal sounds
Physiology of vocal sound production
There are four physical processes involved in producing vocal sound: respiration, phonation, resonation, and articulation. These processes occur in the following sequence:
Breath is taken
Sound is initiated in the larynx
The vocal resonators receive the sound and influence it
The articulators shape the sound into recognizable units
Although these four processes are to be considered separately, in actual practice they merge into one coordinated function. With an effective singer or speaker, one should rarely be reminded of the process involved as their mind and body are so coordinated that one only perceives the resulting unified function. Many vocal problems result from a lack of coordination within this process.
Respiration
In its most basic sense, respiration is the process of moving air in and out of the body—inhalation and exhalation. Sound is produced in the larynx. But producing the sound would not be possible without a power source: the flow of air from the lungs. This flow sets the vocal folds into motion to produce sound. Breathing for singing and speaking is a more controlled process than is the ordinary breathing used for sustaining life. The controls applied to exhalation are particularly important in good vocal technique.
Phonation
Phonation is the process of producing vocal sound by the vibration of the vocal folds that is in turn modified by the resonance of the vocal tract. It takes place in the larynx when the vocal folds are brought together and breath pressure is applied to them in such a way that vibration ensues causing an audible source of acoustic energy, i.e., sound, which can then be modified by the articulatory actions of the rest of the vocal apparatus. The vocal folds are brought together primarily by the action of the interarytenoid muscles, which pull the arytenoid cartilages together.
Resonation
Vocal resonation is the process by which the basic product of phonation is enhanced in timbre and/or intensity by the air-filled cavities through which it passes on its way to the outside air. Various terms related to the resonation process include amplification, enrichment, enlargement, improvement, intensification, and prolongation, although in strictly scientific usage acoustic authorities would question most of them. The main point to be drawn from these terms by a singer or speaker is that the result of resonation is, or should be, to make a better sound.
There are seven areas that may be listed as possible vocal resonators. In sequence from the lowest within the body to the highest, these areas are the chest, the tracheal tree, the larynx itself, the pharynx, the oral cavity, the nasal cavity, and the sinuses.
Research has shown that the larynx, the pharynx and the oral cavity are the main resonators of vocal sound, with the nasal cavity only coming into play in nasal consonants, or nasal vowels, such as those found in French. This main resonating space, from above the vocal folds to the lips is known as the vocal tract. Many voice users experience sensations in the sinuses that may be misconstrued as resonance. However, these sensations are caused by sympathetic vibrations, and are a result, rather than a cause, of efficient vocal resonance.
Articulation
Articulation is the process by which the joint product of the vibrator and the resonators is shaped into recognizable speech sounds through the muscular adjustments and movements of the speech organs. These adjustments and movements of the articulators result in verbal communication and thus form the essential difference between the human voice and other musical instruments. Singing without understandable words limits the voice to nonverbal communication. In relation to the physical process of singing, vocal instructors tend to focus more on active articulation as opposed to passive articulation. There are five basic active articulators: the lip ("labial consonants"), the flexible front of the tongue ("coronal consonants"), the middle/back of the tongue ("dorsal consonants"), the root of the tongue together with the epiglottis ("pharyngeal consonants"), and the glottis ("glottal consonants"). These articulators can act independently of each other, and two or more may work together in what is called coarticulation.
Unlike active articulation, passive articulation is a continuum without many clear-cut boundaries. The places linguolabial and interdental, interdental and dental, dental and alveolar, alveolar and palatal, palatal and velar, velar and uvular merge into one another, and a consonant may be pronounced somewhere between the named places.
In addition, when the front of the tongue is used, it may be the upper surface or blade of the tongue that makes contact ("laminal consonants"), the tip of the tongue ("apical consonants"), or the under surface ("sub-apical consonants"). These articulations also merge into one another without clear boundaries.
Interpretation
Interpretation is sometimes listed by voice teachers as a fifth physical process even though strictly speaking it is not a physical process. The reason for this is that interpretation does influence the kind of sound a singer makes, which is ultimately achieved through a physical action the singer is doing. Although teachers may acquaint their students with musical styles and performance practices and suggest certain interpretive effects, most voice teachers agree that interpretation cannot be taught. Students who lack a natural creative imagination and aesthetic sensibility cannot learn it from someone else. Failure to interpret well is not a vocal fault, even though it may affect vocal sound significantly.
Classification of vocal sounds
Vocal sounds are divided into two basic categories—vowels and consonants—with a wide variety of sub-classifications. Voice teachers and serious voice students spend a great deal of time studying how the voice forms vowels and consonants, and studying the problems that certain consonants or vowels may cause while singing. The International Phonetic Alphabet is used frequently by voice teachers and their students.
Problems in describing vocal sounds
Describing vocal sound is an inexact science largely because the human voice is a self-contained instrument. Since the vocal instrument is internal, the singer's ability to monitor the sound produced is complicated by the vibrations carried to the ear through the Eustachian (auditory) tube and the bony structures of the head and neck. In other words, most singers hear something different in their ears/head than what a person listening to them hears. As a result, voice teachers often focus less on how it "sounds" and more on how it "feels". Vibratory sensations resulting from the closely related processes of phonation and resonation, and kinesthetic ones arising from muscle tension, movement, body position, and weight serve as a guide to the singer on correct vocal production.
Another problem in describing vocal sound lies in the vocal vocabulary itself. There are many schools of thought within vocal pedagogy and different schools have adopted different terms, sometimes from other artistic disciplines. This has led to the use of a plethora of descriptive terms applied to the voice which are not always understood to mean the same thing. Some terms sometimes used to describe a quality of a voice's sound are: warm, white, dark, light, round, reedy, spread, focused, covered, swallowed, forward, ringing, hooty, bleaty, plummy, mellow, pear-shaped, and so forth.
Body alignment
The singing process functions best when certain physical conditions of the body exist. The ability to move air in and out of the body freely and to obtain the needed quantity of air can be seriously affected by the body alignment of the various parts of the breathing mechanism. A sunken chest position will limit the capacity of the lungs, and a tense abdominal wall will inhibit the downward travel of the diaphragm. Good body alignment allows the breathing mechanism to fulfill its basic function efficiently without any undue expenditure of energy. Good body alignment also makes it easier to initiate phonation and to tune the resonators as proper alignment prevents unnecessary tension in the body. Voice instructors have also noted that when singers assume good body alignment it often provides them with a greater sense of self-assurance and poise while performing. Audiences also tend to respond better to singers with good body alignment. Habitual good body alignment also ultimately improves the overall health of the body by enabling better blood circulation and preventing fatigue and stress on the body.
Breathing and breath support
All singing begins with breath. All vocal sounds are created by vibrations in the larynx caused by air from the lungs. Breathing in everyday life is a subconscious bodily function which occurs naturally; however, the singer must have control of the intake and exhalation of breath to achieve maximum results from their voice.
Natural breathing has three stages: a breathing-in period, a breathing-out period, and a resting or recovery period; these stages are not usually consciously controlled. Within singing there are four stages of breathing:
breathing-in period (inhalation)
setting up controls period (suspension)
controlled exhalation period (phonation)
recovery period
These stages must be under conscious control by the singer until they become conditioned reflexes. Many singers abandon conscious controls before their reflexes are fully conditioned, which ultimately leads to chronic vocal problems.
Voice classification
In European classical music and opera, voices are treated like musical instruments. Composers who write vocal music must have an understanding of the skills, talents, and vocal properties of singers. Voice classification is the process by which human singing voices are evaluated and are thereby designated into voice types. These qualities include but are not limited to: vocal range, vocal weight, vocal tessitura, vocal timbre, and vocal transition points such as breaks and lifts within the voice. Other considerations are physical characteristics, speech level, scientific testing, and vocal registration. The science behind voice classification developed within European classical music and has been slow in adapting to more modern forms of singing. Voice classification is often used within opera to associate possible roles with potential voices. There are currently several different systems in use within classical music including: the German Fach system and the choral music system among many others. No system is universally applied or accepted.
However, most classical music systems acknowledge seven different major voice categories. Women are typically divided into three groups: soprano, mezzo-soprano, and contralto. Men are usually divided into four groups: countertenor, tenor, baritone, and bass. When considering children's voices, an eighth term, treble, can be applied. Within each of these major categories there are several sub-categories that identify specific vocal qualities like coloratura facility and vocal weight to differentiate between voices.
Within choral music, singers' voices are divided solely on the basis of vocal range. Choral music most commonly divides vocal parts into high and low voices within each sex (SATB). As a result, the typical choral situation affords many opportunities for misclassification to occur. Since most people have medium voices, they must be assigned to a part that is either too high or too low for them; the mezzo-soprano must sing soprano or alto and the baritone must sing tenor or bass. Either option can present problems for the singer, but for most singers there are fewer dangers in singing too low than in singing too high.
Within contemporary forms of music (sometimes referred to as Contemporary Commercial Music), singers are classified by the style of music they sing, such as jazz, pop, blues, soul, country, folk, and rock styles. There is currently no authoritative voice classification system within non-classical music. Attempts have been made to adopt classical voice type terms to other forms of singing but such attempts have been met with controversy. The development of voice categorizations were made with the understanding that the singer would be using classical vocal technique within a specified range using unamplified (no microphones) vocal production. Since contemporary musicians use different vocal techniques, microphones, and are not forced to fit into a specific vocal role, applying such terms as soprano, tenor, baritone, etc. can be misleading or even inaccurate.
Dangers of quick identification
Many voice teachers warn of the dangers of quick identification. Premature concern with classification can result in misclassification, with all its attendant dangers. Vennard says:
"I never feel any urgency about classifying a beginning student. So many premature diagnoses have been proved wrong, and it can be harmful to the student and embarrassing to the teacher to keep striving for an ill-chosen goal. It is best to begin in the middle part of the voice and work upward and downward until the voice classifies itself."
Most voice teachers believe that it is essential to establish good vocal habits within a limited and comfortable range before attempting to classify the voice. When techniques of posture, breathing, phonation, resonation, and articulation have become established in this comfortable area, the true quality of the voice will emerge and the upper and lower limits of the range can be explored safely. Only then can a tentative classification be arrived at, and it may be adjusted as the voice continues to develop. Many acclaimed voice instructors suggest that teachers begin by assuming that a voice is of a medium classification until it proves otherwise. The reason for this is that the majority of individuals possess medium voices and therefore this approach is less likely to misclassify or damage the voice.
Vocal registration
Vocal registration refers to the system of vocal registers within the human voice. A register in the human voice is a particular series of tones, produced in the same vibratory pattern of the vocal folds, and possessing the same quality. Registers originate in laryngeal function. They occur because the vocal folds are capable of producing several different vibratory patterns. Each of these vibratory patterns appears within a particular range of pitches and produces certain characteristic sounds. The term register can be somewhat confusing as it encompasses several aspects of the human voice. The term register can be used to refer to any of the following:
A particular part of the vocal range such as the upper, middle, or lower registers.
A resonance area such as chest voice or head voice.
A phonatory process
A certain vocal timbre
A region of the voice which is defined or delimited by vocal breaks.
A subset of a language used for a particular purpose or in a particular social setting.
In linguistics, a register language is a language which combines tone and vowel phonation into a single phonological system.
Within speech pathology the term vocal register has three constituent elements: a certain vibratory pattern of the vocal folds, a certain series of pitches, and a certain type of sound. Speech pathologists identify four vocal registers based on the physiology of laryngeal function: the vocal fry register, the modal register, the falsetto register, and the whistle register. This view is also adopted by many teachers of singing.
Some voice teachers, however, organize registers differently. There are over a dozen different constructs of vocal registers in use within the field. The confusion which exists concerning what a register is, and how many registers there are, is due in part to what takes place in the modal register when a person sings from the lowest pitches of that register to the highest pitches. The frequency of vibration of the vocal folds is determined by their length, tension, and mass. As pitch rises, the vocal folds are lengthened, tension increases, and their thickness decreases. In other words, all three of these factors are in a state of flux in the transition from the lowest to the highest tones.
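A rough way to see how these three factors trade off is the ideal-string formula from acoustics (an analogy only; the vocal folds are layered tissue, not ideal strings):

$$ f_0 = \frac{1}{2L} \sqrt{\frac{T}{\mu}} $$

where f0 is the fundamental frequency, L the vibrating length, T the tension, and μ the mass per unit length. Greater tension raises pitch, while greater length or mass lowers it; in singing, lengthening the folds also stretches and thins them, so the increase in tension and the decrease in effective vibrating mass outweigh the added length, and the pitch rises.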
If a singer holds any of these factors constant and interferes with their progressive state of change, his laryngeal function tends to become static and eventually breaks occur with obvious changes of tone quality. These breaks are often identified as register boundaries or as transition areas between registers. The distinct change or break between registers is called a passaggio or a ponticello. Vocal instructors teach that with study a singer can move effortlessly from one register to the other with ease and consistent tone. Registers can even overlap while singing. Teachers who like to use this theory of "blending registers" usually help students through the "passage" from one register to another by hiding their "lift" (where the voice changes).
However, many voice instructors disagree with this distinction of boundaries, blaming such breaks on vocal problems created by a static laryngeal adjustment that does not permit the necessary changes to take place. This difference of opinion has affected the different views on vocal registration.
Coordination
Singing is an integrated and coordinated act and it is difficult to discuss any of the individual technical areas and processes without relating them to the others. For example, phonation only comes into perspective when it is connected with respiration; the articulators affect resonance; the resonators affect the vocal folds; the vocal folds affect breath control; and so forth. Vocal problems are often a result of a breakdown in one part of this coordinated process which causes voice teachers to frequently focus in, intensively, on one area of the process with their student until that issue is resolved. However, some areas of the art of singing are so much the result of coordinated functions that it is hard to discuss them under a traditional heading like phonation, resonation, articulation, or respiration.
Once the voice student has become aware of the physical processes that make up the act of singing and of how those processes function, the student begins the task of trying to coordinate them. Inevitably, students and teachers will become more concerned with one area of the technique than another. The various processes may progress at different rates, with a resulting imbalance or lack of coordination. The areas of vocal technique which seem to depend most strongly on the student's ability to coordinate various functions are:
Extending the vocal range to its maximum potential
Developing consistent vocal production with a consistent tone quality
Developing flexibility and agility
Achieving a balanced vibrato
Developing the singing voice
Some consider that singing is not a natural process but a skill that requires highly developed muscle reflexes; others hold that some ways of singing are natural. Singing does not require much muscle strength but it does require a high degree of muscle coordination. Individuals can develop their voices further through the careful and systematic practice of both songs and vocal exercises. Voice teachers instruct their students to exercise their voices in an intelligent manner. Singers should be thinking constantly about the kind of sound they are making and the kind of sensations they are feeling while they are singing.
Exercising the singing voice
There are several purposes for vocal exercises, including:
Warming up the voice
Extending the vocal range
"Lining up" the voice horizontally and vertically
Acquiring vocal techniques such as legato, staccato, control of dynamics, rapid figurations, learning to comfortably sing wide intervals, and correcting vocal faults.
Extending the vocal range
An important goal of vocal development is to learn to sing to the natural limits of one's vocal range without any undesired changes of quality or technique. Voice instructors teach that a singer can only achieve this goal when all of the physical processes involved in singing (such as laryngeal action, breath support, resonance adjustment, and articulatory movement) are effectively working together. Most voice teachers believe that the first step in coordinating these processes is by establishing good vocal habits in the most comfortable tessitura of the voice first before slowly expanding the range beyond that.
There are three factors which significantly affect the ability to sing higher or lower:
The Energy Factor – In this usage the word energy has several connotations. It refers to the total response of the body to the making of sound. It refers to a dynamic relationship between the breathing-in muscles and the breathing-out muscles known as the breath support mechanism. It also refers to the amount of breath pressure delivered to the vocal folds and their resistance to that pressure, and it refers to the dynamic level of the sound.
The Space Factor – Space refers to the amount of space created by the opening of the mouth and the position of the palate and larynx. Generally speaking, a singer's mouth should be opened wider the higher they sing. The internal space or position of the soft palate and larynx can be widened by relaxing the throat. Voice teachers often describe this as feeling like the "beginning of a yawn".
The Depth Factor – In this usage the word depth has two connotations. It refers to the actual physical sensations of depth in the body and vocal mechanism and it refers to mental concepts of depth as related to tone quality.
McKinney says, "These three factors can be expressed in three basic rules: (1) As you sing higher, you must use more energy; as you sing lower, you must use less. (2) As you sing higher, you must use more space; as you sing lower, you must use less. (3) As you sing higher, you must use more depth; as you sing lower, you must use less."
General music studies
Some voice teachers will spend time working with their students on general music knowledge and skills, particularly music theory, music history, and musical styles and practices as they relate to the vocal literature being studied. If required, they may also spend time helping their students become better sight readers, often adopting solfège, which assigns certain syllables to the notes of the scale.
Performance skills and practices
Since singing is a performing art, voice teachers spend some of their time preparing their students for performance. This includes teaching their students stage etiquette, such as bowing, learning to manage stage fright, addressing problems like nervous tics, and the use of equipment such as microphones. Some students may also be preparing for careers in the fields of opera or musical theater, where acting skills are required. Many voice instructors will spend time on acting techniques and audience communication with students in these fields of interest. Students of opera also spend a great deal of time with their voice teachers learning foreign language pronunciations.
See also
Human voice
Voice teacher
Throat singing
External links
National Association of Teachers of Singing
Vocapedia, NATS-sponsored comprehensive database on singing and vocal pedagogy
Human voice
Singing
Opera terminology
Vocal skills
Realism (international relations)

Realism, a school of thought in international relations theory, is a theoretical framework that views world politics as an enduring competition among self-interested states vying for power and positioning within an anarchic global system devoid of a centralized authority. It centers on states as rational primary actors navigating a system shaped by power politics, national interest, and a pursuit of security and self-preservation.
Realism involves the strategic use of military force and alliances to boost global influence while maintaining a balance of power. War is seen as an inevitability inherent in the anarchic conditions of world politics. Realism also emphasizes the complex dynamics of the security dilemma, where actions taken for security reasons can unintentionally lead to tensions between states.
Unlike idealism or liberalism, realism underscores the competitive and conflictual nature of global politics. In contrast to liberalism, which champions cooperation, realism asserts that the dynamics of the international arena revolve around states actively advancing national interests and prioritizing security. While idealism leans towards cooperation and ethical considerations, realism argues that states operate in a realm devoid of inherent justice, where ethical norms may not apply.
Early popular proponents of realism included Thucydides (5th century BCE), Machiavelli (16th century), Hobbes (17th century), and Rousseau (18th century). Carl von Clausewitz (early 19th century), another contributor to the realist school of thought, viewed war as an act of statecraft and placed strong emphasis on hard power. Clausewitz held that armed conflict is inherently zero-sum: between two parties, typically only one victor can emerge, with no compromise peace.
Realism became popular again in the 1930s, during the Great Depression. At that time, it stood in polemical opposition to the progressive, reformist optimism associated with liberal internationalists such as U.S. President Woodrow Wilson. The 20th-century brand of classical realism, exemplified by theorists such as Reinhold Niebuhr and Hans Morgenthau, evolved into neorealism—a more scientifically oriented approach to the study of international relations developed during the latter half of the Cold War. In the 21st century, realism has experienced a resurgence, fueled by escalating tensions among world powers. Some of the most influential proponents of political realism today are John Mearsheimer and Stephen Walt.
Overview
Realists fall into three classes based on their view of the essential causes of conflict between states:
Classical realists believe that conflict follows from human nature.
Neorealists attribute conflict to the dynamics of the anarchic state-system.
Neoclassical realists believe that conflict results from both, in combination with domestic politics. Neoclassical realists are also divided between defensive and offensive realism.
Realism entails a spectrum of ideas, which tend to revolve around several central propositions, such as:
State-centrism: states are the central actors in international politics, rather than leaders or international organizations;
Anarchy: the international political system is anarchic, as there is no supranational authority to enforce rules;
Rationality and/or egoism: states act in their rational self-interest within the international system; and
Power: states desire power to ensure self-preservation.
Political scientists sometimes associate realism with Realpolitik, as both deal with the pursuit, possession, and application of power. Realpolitik, however, is an older prescriptive guideline limited to policy-making, while realism is a wider theoretical and methodological paradigm which aims to describe, explain, and predict events in international relations. As an academic pursuit, realism is not necessarily tied to ideology; it does not favor any particular moral philosophy, nor does it consider ideology to be a major factor in the behavior of nations.
However, realists are generally critical of liberal foreign policy. Garrett Ward Sheldon has characterised the priorities of realists as Machiavellian, seeing them as prioritising the pursuit of power, although realists have also advocated the idea that powerful states concede spheres of influence to other powerful states.
Common assumptions
The four propositions of realism are as follows.
State-centrism: States are the most important actors.
Anarchy: The international system is anarchic.
No actor exists above states, capable of regulating their interactions; states must arrive at relations with other states on their own, rather than it being dictated to them by some higher controlling entity.
The international system exists in a state of constant antagonism (anarchy).
Egoism: All states within the system pursue narrow self-interests.
States tend to pursue self-interest.
Groups strive to attain as many resources as possible (relative gain).
Power politics: The primary concern of all states is power and security.
States build up their militaries to survive, which may lead to a security dilemma.
Realists believe that mankind is not inherently benevolent but rather self-centered and competitive. This perspective, which is shared by theorists such as Thomas Hobbes, views human nature as egocentric (not necessarily selfish) and conflictual unless there exist conditions under which humans may coexist. It also holds that, left ungoverned, the individual's natural state tends toward anarchy. In regard to self-interest, these individuals are self-reliant and motivated by the pursuit of more power. They are also believed to be fearful. This view contrasts with the approach of liberalism to international relations.
The state emphasises an interest in accumulating power to ensure security in an anarchic world. Power is a concept primarily thought of in terms of material resources necessary to induce harm or coerce other states (to fight and win wars). The use of power places an emphasis on coercive tactics being acceptable to either accomplish something in the national interest or avoid something inimical to the national interest. The state is the most important actor under realism. It is unitary and autonomous because it speaks and acts with one voice. The power of the state is understood in terms of its military capabilities. A key concept under realism is the international distribution of power referred to as system polarity. Polarity refers to the number of blocs of states that exert power in an international system. A multipolar system is composed of three or more blocs, a bipolar system is composed of two blocs, and a unipolar system is dominated by a single power or hegemon. Under unipolarity realism predicts that states will band together to oppose the hegemon and restore a balance of power. Although all states seek hegemony under realism as the only way to ensure their own security, other states in the system are incentivised to prevent the emergence of a hegemon through balancing.
States employ the rational model of decision making by obtaining and acting upon complete and accurate information. The state is sovereign and guided by a national interest defined in terms of power. Since the only constraint of the international system is anarchy, there is no international authority and states are left to their own devices to ensure their own security. Realists believe that sovereign states are the principal actors in the international system. International institutions, non-governmental organizations, multinational corporations, individuals and other sub-state or trans-state actors are viewed as having little independent influence. States are inherently aggressive (offensive realism) and obsessed with security (defensive realism). Territorial expansion is only constrained by opposing powers. This aggressive build-up, however, leads to a security dilemma whereby increasing one's security may bring along even greater instability as an opposing power builds up its own arms in response (an arms race). Thus, security becomes a zero-sum game where only relative gains can be made. Moreover, the "relative gains" notion of the realist school implies that states must fight against each other to gain benefits.
Realists believe that there are no universal principles with which all states may guide their actions. Instead, a state must always be aware of the actions of the states around it and must use a pragmatic approach to resolve problems as they arise. A lack of certainty regarding intentions prompts mistrust and competition between states.
Rather than assume that states are the central actors, some realists, such as William Wohlforth and Randall Schweller, refer instead to "groups" as the key actors of interest.
Finally, states are sometimes described as "billiard balls" or "black boxes". This analogy is meant to underscore the secondary importance of internal state dynamics and decisionmaking in realist models, in stark contrast to bureaucratic or individual-level theories of international relations.
Realism in statecraft
The ideas behind George F. Kennan's work as a diplomat and diplomatic historian remain relevant to the debate over American foreign policy, which since the 19th century has been characterized by a shift from the Founding Fathers' realist school to the idealistic or Wilsonian school of international relations. In the realist tradition, security is based on the principle of a balance of power and the reliance on morality as the sole determining factor in statecraft is considered impractical. According to the Wilsonian approach, on the other hand, the spread of democracy abroad as a foreign policy is key and morals are universally valid. During the Presidency of Bill Clinton, American diplomacy reflected the Wilsonian school to such a degree that those in favor of the realist approach likened Clinton's policies to social work. Some argue that in Kennan's view of American diplomacy, based on the realist approach, such apparent moralism without regard to the realities of power and the national interest is self-defeating and may lead to the erosion of power, to America's detriment. Others argue that Kennan, a proponent of the Marshall Plan (which gave out bountiful American aid to post-WW2 countries), might agree that Clinton's aid functioned strategically to secure international leverage: a diplomatic maneuver well within the bounds of political realism as described by Hedley Bull.
Realists often hold that statesmen tend towards realism whereas realism is deeply unpopular among the public. When statesmen take actions that divert from realist policies, academic realists often argue that this is due to distortions that stem from domestic politics. However, some research suggests that realist policies are actually popular among the public whereas elites are more beholden to liberal ideas. Abrahamsen suggested that realpolitik for middle powers can include supporting idealism and liberal internationalism.
Historical branches and antecedents
While realism as a formal discipline in international relations did not arrive until World War II, its primary assumptions have been expressed in earlier writings. Realists trace the history of their ideas back to classical antiquity, beginning with Thucydides (c. 5th century BCE).
Historian Jean Bethke Elshtain traces the historiography of realism:
The genealogy of realism as international relations, although acknowledging antecedents, gets down to serious business with Machiavelli, moving on to theorists of sovereignty and apologists for the national interest. It is present in its early modern forms with Hobbes's Leviathan (1651).
Modern realism began as a serious field of research in the United States during and after World War II. This evolution was partly fueled by European war migrants like Hans Morgenthau, whose work Politics Among Nations is considered a seminal development in the rise of modern realism. Other influential figures were George F. Kennan (known for his work on containment), Nicholas Spykman (known for his work on geostrategy and containment), Herman Kahn (known for his work on nuclear strategy) and E. H. Carr.
Classical realism
Classical realism states that it is fundamentally the nature of humans that pushes states and individuals to act in a way that places interests over ideologies. Classical realism is an ideology defined as the view that the "drive for power and the will to dominate [are] held to be fundamental aspects of human nature".
Prominent classical realists:
E. H. Carr
Hans Morgenthau
Reinhold Niebuhr – Christian realism
Raymond Aron
George Kennan
Liberal realism or the English school of rationalism
The English school holds that the international system, while anarchical in structure, forms a "society of states" where common norms and interests allow for more order and stability than that which may be expected in a strict realist view. Prominent English School writer Hedley Bull's 1977 classic, The Anarchical Society, is a key statement of this position.
Prominent liberal realists:
Hedley Bull – argued for both the existence of an international society of states and its perseverance even in times of great systemic upheaval, meaning regional or so-called "world wars"
Martin Wight
Barry Buzan
Neorealism or structural realism
Neorealism derives from classical realism except that instead of human nature, its focus is predominantly on the anarchic structure of the international system. States are primary actors because there is no political monopoly on force existing above any sovereign. While states remain the principal actors, greater attention is given to the forces above and below the states through levels of analysis or structure and agency debate. The international system is seen as a structure acting on the state with individuals below the level of the state acting as agency on the state as a whole.
While neorealism shares a focus on the international system with the English school, neorealism differs in the emphasis it places on the permanence of conflict. To ensure state security, states must be in constant preparation for conflict through economic and military build-up.
Prominent neorealists:
Robert J. Art – neorealism
Robert Gilpin – hegemonic theory
Robert Jervis – defensive realism
John Mearsheimer – offensive realism
Barry Posen – neorealism
Kenneth Waltz – defensive realism
Stephen Walt – defensive realism
Neoclassical realism
Neoclassical realism can be seen as the third generation of realism, coming after the classical authors of the first wave (Thucydides, Niccolò Machiavelli, Thomas Hobbes) and the neorealists (especially Kenneth Waltz). Its designation of "neoclassical", then, has a double meaning:
It offers the classics a renaissance;
It is a synthesis of the neorealist and the classical realist approaches.
Gideon Rose is responsible for coining the term in a book review he wrote in 1998.
The primary motivation underlying the development of neoclassical realism was the fact that neorealism was only useful to explain political outcomes (classified as being theories of international politics), but had nothing to offer about particular states' behavior (or theories of foreign policy). The basic approach, then, was for these authors to "refine, not refute, Kenneth Waltz", by adding domestic intervening variables between systemic incentives and a state's foreign policy decision. Thus, the basic theoretical architecture of neoclassical realism is:
Distribution of power in the international system (independent variable)
Domestic perception of the system and domestic incentives (intervening variable)
Foreign policy decision (dependent variable)
While neoclassical realism has only been used for theories of foreign policy so far, Randall Schweller notes that it could be useful to explain certain types of political outcomes as well.
Neoclassical realism is particularly appealing from a research standpoint because it retains much of the theoretical rigor that Waltz brought to realism while easily incorporating a content-rich analysis, since its main method for testing theories is the process-tracing of case studies.
Prominent neoclassical realists:
Aaron Friedberg
Randall Schweller
William Wohlforth
Fareed Zakaria
Realist constructivism
Some see a complementarity between realism and constructivism. Samuel Barkin, for instance, holds that "realist constructivism" can fruitfully "study the relationship between normative structures, the carriers of political morality, and uses of power" in ways that existing approaches do not. Similarly, Jennifer Sterling-Folker has argued that theoretical synthesis helps explanations of international monetary policy by combining realism's emphasis of an anarchic system with constructivism's insights regarding important factors from the domestic level. Scholars such as Oded Löwenheim and Ned Lebow have also been associated with realist constructivism.
Criticisms
Democratic peace
Advocates of democratic peace theory also argue that realism is not applicable to democratic states' relations with each other, as their studies claim that such states do not go to war with one another. However, realists and proponents of other schools have countered this claim, arguing that its definitions of "war" and "democracy" must be tweaked in order to achieve this result. The interactive model of democratic peace observes a gradual influence of both democracy and democratic difference on wars and militarized interstate disputes. A realist government may not consider it in its interest to start a war for little gain, so realism does not necessarily mean constant battles.
Hegemonic peace and conflict
Robert Gilpin developed hegemonic stability theory within the realist framework, but limited it to the economic field. Niall Ferguson remarked that the theory has offered insights into the way that economic power works, but neglected the military and cultural aspects of power.
John Ikenberry and Daniel Deudney state that the Iraq War, conventionally blamed on liberal internationalism by realists, actually originates more closely from hegemonic realism. The "instigators of the war", they suggest, were hegemonic realists. Where liberal internationalists reluctantly supported the war, they followed arguments linked to interdependence realism relating to arms control. John Mearsheimer notes that one might take events such as the Bush Doctrine as "evidence of untethered realism that unipolarity made possible," but disagrees, contending that the various interventions are instead caused by a belief that a liberal international order can transcend power politics.
Inconsistent with non-European politics
Scholars have argued that realist theories, in particular realist conceptions of anarchy and balances of power, have not characterized the international systems of East Asia and Africa (before, during and after colonization).
State-centrism
Scholars have criticized realist theories of international relations for assuming that states are fixed and unitary units.
Appeasement
In the mid-20th century, realism was seen as discredited in the United Kingdom due to its association with appeasement in the 1930s. It re-emerged slowly during the Cold War.
Scholar Aaron McKeil pointed to major illiberal tendencies within realism that, aiming for a sense of "restraint" against liberal interventionism, would lead to more proxy wars, and fail to offer institutions and norms for mitigating great power conflict.
Realism as degenerative research programs
John Vasquez applied Imre Lakatos's criteria and concluded that the realist research program is degenerating, owing to the protean character of its theoretical development, an unwillingness to specify what constitutes the true theory, the continual adoption of auxiliary propositions to explain away flaws, and a lack of strong research findings. Against Vasquez, Stephen Walt argued that Vasquez overlooked the progressive power of realist theory. Kenneth Waltz claimed that Vasquez misunderstood Lakatos.
Abstract theorizing and non-consensus moral principles
The mainstream version of realism is criticized for abstract theorizing at the expense of historical detail, and for grounding its "rules of international conduct" in moral principles that command no consensus, as evidenced in the case of the Russian invasion of Ukraine.
See also
Complex interdependence
Consensus reality
Consequentialism
International legal theory
Game theory
Global justice
Legalism (Chinese philosophy)
Might makes right
Negarchy
Peace through strength
Realpolitik
Moral nihilism
Deterrence theory
Further reading
Ashley, Richard K. "Political Realism and the Human Interests", International Studies Quarterly (1981) 25: 204–36.
Barkin, J. Samuel Realist Constructivism: Rethinking International Relations Theory (Cambridge University Press; 2010) 202 pages. Examines areas of both tension and overlap between the two approaches to IR theory.
Bell, Duncan, ed. Political Thought and International Relations: Variations on a Realist Theme. Oxford: Oxford University Press, 2008.
Booth, Ken. 1991. "Security in anarchy: Utopian realism in theory and practice", International Affairs 67(3), pp. 527–545
Crawford, Robert M. A. Idealism and Realism in International Relations: Beyond the Discipline (2000) online edition
Donnelly, Jack. Realism and International Relations (2000) online edition
Gilpin, Robert G. "The richness of the tradition of political realism", International Organization (1984), 38:287–304
Griffiths, Martin. Realism, Idealism, and International Politics: A Reinterpretation (1992) online edition
Guilhot, Nicolas, ed. The Invention of International Relations Theory: Realism, the Rockefeller Foundation, and the 1954 Conference on Theory (2011)
Keohane, Robert O., ed. Neorealism and its Critics (1986)
Lebow, Richard Ned. The Tragic Vision of Politics: Ethics, Interests and Orders. Cambridge: Cambridge University Press, 2003.
Mearsheimer, John J., "The Tragedy of Great Power Politics." New York: W.W. Norton & Company, 2001. [Seminal text on Offensive Neorealism]
Meyer, Donald. The Protestant Search for Political Realism, 1919–1941 (1988) online edition
Molloy, Sean. The Hidden History of Realism: A Genealogy of Power Politics. New York: Palgrave, 2006.
Morgenthau, Hans. "Scientific Man versus Power Politics" (1946) Chicago, IL: University of Chicago Press.
"Politics Among Nations: The Struggle for Power and Peace" (1948) New York NY: Alfred A. Knopf.
"In Defense of the National Interest" (1951) New York, NY: Alfred A. Knopf.
"The Purpose of American Politics" (1960) New York, NY: Alfred A. Knopf.
Murray, A. J. H., Reconstructing Realism: Between Power Politics and Cosmopolitan Ethics. Edinburgh: Keele University Press, 1997.
Rösch, Felix. "Unlearning Modernity. A Realist Method for Critical International Relations?." Journal of International Political Theory 13, no. 1 (2017): 81–99.
Rosenthal, Joel H. Righteous Realists: Political Realism, Responsible Power, and American Culture in the Nuclear Age. (1991). 191 pp. Compares Reinhold Niebuhr, Hans J. Morgenthau, Walter Lippmann, George F. Kennan, and Dean Acheson
Scheuerman, William E. 2010. "The (classical) Realist vision of global reform." International Theory 2(2): pp. 246–282.
Schuett, Robert. Political Realism, Freud, and Human Nature in International Relations. New York: Palgrave, 2010.
Smith, Michael Joseph. Realist Thought from Weber to Kissinger (1986)
Tjalve, Vibeke S. Realist Strategies of Republican Peace: Niebuhr, Morgenthau, and the Politics of Patriotic Dissent. New York: Palgrave, 2008.
Williams, Michael C. The Realist Tradition and the Limits of International Relations. Cambridge: Cambridge University Press, 2005. online edition
External links
Political Realism in International Relations in Stanford Encyclopedia of Philosophy
Richard K. Betts, "Realism", YouTube
Political realism
International relations theory
Community organization

Community organization or community based organization refers to organization aimed at making desired improvements to a community's social health, well-being, and overall functioning. Community organization occurs in geographically, psychosocially, culturally, spiritually, and digitally bounded communities.
Community organization includes community work, community projects, community development, community empowerment, community building, and community mobilization. It is a commonly used model for organizing community within community projects, neighborhoods, organizations, voluntary associations, localities, and social networks, which may operate as ways to mobilize around geography, shared space, shared experience, interest, need, and/or concern.
Introduction
Community organization is differentiated from conflict-oriented community organizing, which focuses on short-term change through appeals to authority (i.e., pressuring established power structures for desired change), by focusing on long-term and short-term change through direct action and the organizing of community (i.e., the creation of alternative systems outside of established power structures). This often includes inclusive networking, interpersonal organizing, listening, reflexivity, non-violent communication, cooperation, mutual aid and social care, prefiguration, popular education, and direct democracy.
Within organizations, variations exist in terms of size and structure. Some are formally incorporated, with codified bylaws and Boards of Directors (also known as a committee), while others are much smaller, more informal, and grassroots. Community organization may be more effective in addressing need, as well as in achieving short-term and long-term goals, than larger, more bureaucratic organizations. Contemporary community organization, known as "The New Community Organizing", includes glocalized perspectives and organizing methods. A multiplicity of institutions, groups, and activities does not by itself define community organization. Rather, factors such as the interaction, integration, and coordination of existing groups, assets, and activities, the relationships among them, and the evolution of new structures and communities are characteristics unique to community organization.
Community organization may often lead to greater understanding of community contexts. It is characterized by community building, community planning, direct action and mobilization, the promotion of community change, and, ultimately, changes within larger social systems and power structures along with localized ones.
Community organization generally functions within not-for-profit efforts, and funding often goes directly toward supporting organizing activities. Under globalization, the ubiquity of ICTs, neoliberalism, and austerity have caused many organizations to face complex challenges such as mission drift and coercion by state and private funders. These political and economic conditions have led some to seek alternative funding sources such as fee-for-service, crowdfunding, and other creative avenues.
Definitions
The United Nations in 1955 considered community organization as complementary to community development. The United Nations assumed that community development is operative in marginalized communities, and community organization is operative in areas where levels of living are relatively high and social services relatively well developed, but where a greater degree of integration and community initiative is recognized as desirable.
In 1955, Murray G. Ross defined community organization as a process by which a community identifies its needs or objectives, orders (or ranks) these needs or objectives, develops the confidence and will to work at these needs or objectives, finds the resources (internal and/or external) to deal with these needs or objectives, takes action in respect to them, and in so doing, extends and develops co-operative and collaborative attitudes and practices within the community.
In 1921, Eduard C. Lindeman defined community organization as "that phase of social organization which constitutes a conscious effort on the part of a community to control its affairs democratically and to secure the highest services from its specialists, organizations, agencies, and institutions by means of recognized interrelations."
In 1925, Walter W. Pettit stated that "Community organization is perhaps best defined as assisting a group of people to recognize their common needs and helping them to meet these needs."
In 1940, Russell H. Kurtz defined community organization as "a process dealing primarily with program relationships and thus to be distinguished in its social work setting from those other basic processes, such as casework and group work. Those relationships of agency to agency, of agency to community and of community to agency reach in all directions from any focal point in the social work picture. Community organization may be thought of as the process by which these relationships are initiated, altered or terminated to meet changing conditions, and it is thus basic to all social work..."
In 1947, Wayne McMillen defined community organization, in its generic sense, as "deliberately directed effort to assist groups in attaining unity of purpose and action. It is practiced, though often without recognition of its character, wherever the objective is to achieve or maintain a pooling of the talents and resources of two or more groups in behalf of either general or specific objectives."
In 1954, C. F. McNeil said, "Community organization for social welfare is the process by which the people of a community, as individual citizens or as representatives of groups, join together to determine social welfare needs, plan ways of meeting them and mobilise the necessary resources."
In 1967, Murray G. Ross defined community organization as a process by which a community identifies needs or objectives, takes action, and through this process, develops cooperative and collaborative attitudes and practices within a community.
In 1975, Kramer and Specht stated "Community organization refers to various methods of intervention whereby a professional change agent helps a community action system composed of individuals, groups, or organizations to engage in planned collective action in order to deal with special problems within the democratic system of values."
Comparison between related terms
Community organization and community development are interrelated, and both have their roots in community social work. To achieve the goals of community development, the community organization method is used. According to the United Nations, community development deals with the total development of a developing country, including economic, physical, and social aspects. For achieving total development, community organization is used. In community development, aspects like democratic procedures, voluntary cooperation, self-help, development of leadership, awareness and sensitisation are considered important. The same aspects are also considered important by community organization.
History
Informal associations of people focused on the common good have existed in most societies. The first formal precursor to the Community Benefit Organization was recorded in Elizabethan England to overcome the acute problem of poverty, which led to beggary. In England, Elizabethan poor law (1601) was set up to provide services to the needy. The London Society of Organizing Charitable Relief and Repressing Mendicancy and the settlement house movement followed in England during the late 1800s.
This model of community organizing was carried into the United States of America. In 1880, the Charity Organization movement was set up to bring rational order to the area of charity and relief. The first citywide Charity Organization Society (COS) was established in Buffalo, New York, US, in 1877. Rev. S. H. Gurteen, an English priest who had moved to Buffalo in 1873, led the COS's outreach to more than 25 American cities. The American Association for Community Organization was organized in 1918 as the national agency for chests and councils, and it later became known as Community Chests and Councils of America (CCC). The Cincinnati Public Health Federation, established in 1917, was the first independent health council in an American city.
In 1946, the National Conference of Social Work met in Buffalo, where the Association for the Study of Community Organization (ASCO) was organized. The main objective was to improve the professional practice of organization for social welfare. In 1955, ASCO merged with six other professional organisations to form the National Association of Social Workers. The Settlement movement and "settlement houses" are historically significant examples of community organizations, participating in both organizing and development at the neighborhood level. Settlement houses were commonly located in the industrial cities of the East and Midwest during the beginning of the 20th century; Jane Addams' Hull House in Chicago, Illinois, was a notable example. They were largely established in working-class neighborhoods by the college-educated children of middle-class citizens concerned by the substantial social problems that resulted from increasing industrialization and urbanization. History shows that innovative methods of community organizing have risen in response to vast social problems. The social problems at the time of settlement houses included child labor, working-class poverty, and housing. Settlement workers thought that by providing education services (English classes) and social services (employment assistance, legal aid, recreational programs, children's services) to the poor, the income gap between them and the middle class would narrow. The majority of funding for services came from charitable resources.
Another development in the history of American community development occurred in the wake of World War II. Of prime importance were the American Red Cross and the United Service Organizations (USO), which recruited an immense number of people for volunteer services during the war. After World War II, the focus of community organization turned to emerging problems such as rehabilitation of the physically and mentally challenged, mental health planning, destitution, the neglected aging population, and juvenile delinquency.
The historical development of community organization in the UK is divided into four phases, according to Baldock in 1974:
First Phase (1880-1920): During this period, community work was mainly seen as a method of social work. It was considered a process of helping individuals enhance their social adjustment. It acted as a major player in co-ordinating the work of voluntary agencies.
Second phase (1920-1950): This period saw the emergence of new ways of dealing with social issues and problems. The community organization was closely associated with central and state government programs for urban development. The important development in this period was its association with the community association movement.
Third phase (1950 onwards): This period emerged as a reaction to the neighborhood idea, which had provided an ideological basis for the second phase. The professional development of social work took place during this period. Recognizing the shortcomings of the existing system, social workers sought a professional identity.
Fourth phase: The ongoing period, marked by significant community action. It has questioned the very relationship between community work and social work, and is thus seen as a period of radical social movements, with communities coming into conflict with authority. The association between social workers and the community became deprofessionalized during this period, and conflictual strategies were introduced into community work.
Categories
Typically community organizations fall into the following categories: community-service and action, health, educational, personal growth and improvement, social welfare and self-help for the disadvantaged.
Community-based organizations (CBOs), which operate within a given locality, ensure the sustainable provision of community services and action in health, education, personal growth and improvement, social welfare, and self-help for the disadvantaged. Their sustainability becomes healthier and more feasible because the community is directly involved in the action or operation wherever and whenever monetary and non-monetary support or contributions are generated. Amateur sports clubs, school groups, church groups, youth groups and community support groups are all typical examples of community organizations.
In developing countries (like those in Sub-Saharan Africa) community organizations often focus on community strengthening, including HIV/AIDS awareness, human rights (like the Karen Human Rights Group), health clinics, orphan support, water and sanitation provision, and economic issues. Elsewhere, social animators also concentrate on less common issues, such as the Chengara struggle in Kerala, India, and the Ghosaldanga Adivasi Seva Sangh in West Bengal, India.
Models
In 1970, Jack Rothman formulated three basic models of community organization.
Locality Development - A method of working with community organizations. Initially used by the Settlement House movement, the primary focus was community building and community empowerment. Leadership development, mutual aid, and popular education were considered essential components to this participatory process. Locality development is aimed at meeting the needs of target populations in a defined area (e.g., neighborhood, housing block, tenement housing, school, etc.).
Social Planning - A method of working with a large population. The focus is on evaluating welfare needs and existing services in the area and planning a possible blueprint for a more efficient delivery of services for social problems. It is a model responsive to the needs and attitudes of the community. E.g. housing, health insurance, affordable education, etc.
Social Action - A strategy used by groups, sub-communities, or even national organisations that feel they have inadequate power and resources to meet their needs. They confront the dominant power structure, using conflict as a method to resolve issues of inequality and deprivation. E.g. a structural systems change in social policies that redresses disparities between people of different socioeconomic conditions in social rights such as educational policies, employment policies, etc.
In the late 1990s, Rothman revisited the three community organization typologies of locality development, social planning, and social action, and reflected that they were too rigid as "community processes had become more complex and variegated, and problems had to be approached differently, more subtly, and with greater penetrability." This led to a broadened view of the models as more expansive, nuanced, situational, and interconnected. According to Rothman, the reframing of the typologies as overlapping and integrated ensured that "practitioners of any stripe [have] a greater range in selecting, then mixing and phasing, components of intervention."
Rothman's three basic models of community organization have been critiqued and expanded upon. Feminist community organization scholar, Cheryl Hyde, criticized Rothman's "mixing and phasing" as unable to transcend rigid categorical organizing typologies, as they lacked "dimensions of ideology, longitudinal development ... commitment within community intervention and incorporati[on] [of] social movement literature."
Principles
Principles are expressions of value judgments; they are the generalized guiding rules for sound practice. Arthur Dunham in 1958 formulated a statement of 28 principles of community organisation and grouped them under seven headings. They are:
Democracy and social welfare;
Community roots for community programs;
Citizen understanding, support, and participation and professional service;
Co-operation;
Social Welfare Programs;
Adequacy, distribution, and organisation of social welfare services; and
Prevention.
In India, Siddiqui in 1997 worked out a set of principles based on existing evidence-based indigenous community organization practices.
Objective movement
Specific planning
Active peoples participation
Inter-group approach
Democratic functioning
Flexible organisation
Utilisation of available resources
Cultural orientation
Impact of globalization
Globalization is fundamentally changing the landscape of work, organizations, and community. Many of the challenges created by globalization involve divestment from local communities and neighborhoods, and a changing landscape of work. Paired with the transition to post-industrialization, both challenges and opportunities for grassroots community organizations are growing. Scholars such as Grace Lee Boggs and Gar Alperovitz are noted for their visionary understandings of community organization in this changing context. At the core of these understandings is the acknowledgement that "communities" exist in the context of local, national, and global influences. These and other scholars emphasize the need to create new social, economic, and political systems through community organization, as a way to rebuild local wealth in this changing landscape. Related concepts include visionary organizing, community wealth projects, employee-owned firms, anchor institutions, and place-based education.
In the era of globalization, smaller community organizations typically rely on donations (monetary and in-kind) from local community members and on sponsorship from local government and businesses. In Canada, for example, slightly over 40% of the community organizations surveyed had revenue under C$30,000. These organizations tend to be relationship-based and people-focused. Across all sizes, Canadian community organizations rely on government funding (49%), earned income (35%), and gifts and donations (13%).
Further reading
Cox, F.M. et al. (Ed). (1987): Strategies of Community Organization: A Book of Readings, 4th ed. Itasca, IL: F.E. Peacock.
J. Phillip Thompson (2005). Seeking Effective Power: Why Mayors Need Community Organizations. Perspectives on Politics, 3, pp 301–308.
Jack Rothman (2008). Strategies of Community Intervention. Eddie Bowers Publishing Co.
Siddiqui, H.Y. (1997). "Working with Communities". Hira Publications, New Delhi.
Hardcastle, D. & Powers, P. (2011). Community practice: Theories and skills for social workers. Oxford University Press. New York.
Ledwith, M. (2005), Community Development
Murray G. Ross (1955). Community Organization. Harper and Row Publishers. New York.
Herbert J. Rubin and Irene S. Rubin (2001), Community Organizing and Development, Allyn and Bacon, Massachusetts.
Roger Hadley, Mike Cooper (1987), A Community Social Worker's Handbook, Tavistock Publications, London.
Michael Jacoby Brown (2007). Building Powerful Community Organizations. Long Haul Press.
Harper E.B. and Dunham, Arthur (1959), Community Organisation in Action, Association Press, New York.
External links
Community intervention
ISHR (nd), Project and Organizational Development for NGOs and CBOs, New York: Columbia University
Community
Community organizing
Social work
Types of organization
Welfare and service organizations
Cultural communication

Cultural communication is the practice and study of how different cultures communicate within their community by verbal and nonverbal means. Cultural communication can also be referred to as intercultural communication or cross-cultural communication. Cultures are grouped together by a set of similar beliefs, values, traditions, and expectations, which can all contribute to differences in communication between individuals of different cultures. Cultural communication is a practice and a field of study for many psychologists, anthropologists, and scholars, and is used to study the interactions of individuals from different cultures. Studies done on cultural communication are utilized in ways to improve communication between international exchanges, businesses, employees, and corporations. Two major scholars who have influenced cultural communication studies are Edward T. Hall and Geert Hofstede. Edward T. Hall, an American anthropologist, is considered the founder of cultural communication and of the theory of proxemics, which focuses on how individuals use space while communicating depending on their cultural backgrounds or social settings. The space between individuals can be identified in four different ranges: for example, 0 inches signifies intimate space, while 12 feet signifies public space. Geert Hofstede was a social psychologist who founded the theory of cultural dimensions, in which five dimensions aim to measure differences between cultures: power distance, uncertainty avoidance, individualism versus collectivism, masculinity versus femininity, and long-term versus short-term orientation.
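To make Hall's proxemic ranges concrete, the following minimal Python sketch maps a conversational distance to one of his four zones. The article names only the endpoints (0 inches for intimate space, 12 feet for public space), so the intermediate cutoffs of roughly 18 inches and 4 feet used below are the commonly cited approximations and should be read as illustrative assumptions rather than part of the source text.

```python
# Illustrative sketch of Edward T. Hall's four proxemic zones.
# Assumption: the intermediate cutoffs (~18 inches, ~4 feet) are the
# commonly cited approximations; the article itself gives only the
# endpoints (0 inches = intimate, 12 feet = public).

def proxemic_zone(distance_feet: float) -> str:
    """Map a speaker-to-listener distance in feet to a proxemic zone."""
    if distance_feet < 1.5:       # up to about 18 inches
        return "intimate"
    elif distance_feet < 4.0:     # about 18 inches to 4 feet
        return "personal"
    elif distance_feet < 12.0:    # about 4 to 12 feet
        return "social"
    else:                         # 12 feet and beyond
        return "public"

for d in (0.5, 3.0, 8.0, 15.0):
    print(f"{d} ft -> {proxemic_zone(d)}")
```

Running the loop prints one zone per sample distance (for example, 0.5 ft -> intimate), which mirrors how proxemics research bins observed interpersonal distances before comparing them across cultures.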
Overview
Intercultural miscommunication draws on the fact that all humans subconsciously reflect their cultural backgrounds in day-to-day communication. Culture does not just lie in the way one eats or dresses, but in the manner in which people present themselves as an entity to the outside world. Language is a huge component of communication, as well as a large representation of one's cultural background. Cultural miscommunication often stems from different and conflicting styles of speech and messages. A perfectly normal intonation pattern for a native German speaker may seem angry and aggressive to a foreign listener. Connotations of words, as well as meanings of slang phrases, vary greatly across cultural lines, and a lack of tolerance and understanding of this fact often results in misinterpretations.
Non-Verbal Communication
Non-verbal communication differs across cultures, and one must take the time to study different cultures so as to fully understand the messages being transmitted, because 70% of communication is non-verbal, while only 30% is verbal. Different aspects of non-verbal communication include facial expressions (happy, sad, angry, confused), which are interpreted differently around the world; eye contact (direct, or no eye contact); body language (slouching, arm positioning, leg positioning, rocking motion, standing still); gestures (hand gestures, small gestures, big gestures, no gestures); touching (reaching out to someone, touching an arm); and proxemics (distance between each other). Just as verbal language differs in every culture, so does non-verbal language. All aspects of language are culturally influenced, based on what one observed and experienced while growing up, which differs in different parts of the world. Being able to combine the meanings of what is communicated verbally and non-verbally gives people the ability to fully understand what is occurring in an interaction with someone. Facial expressions can be useful in showing people's emotions while they are talking, or even while they are not saying anything. Knowing what different emotions look like as facial expressions helps people understand what is being communicated to them without the use of words.
An example that can be used to explain how non-verbal communication differs in different areas of the world is eye contact. In the West, eye contact is used as a way of showing where your attention is, as well as a sign of respect toward whoever is talking to you. In some other societies, however, direct eye contact can be seen as confrontational. The meanings of the various aspects of non-verbal communication differ cross-culturally in different societies and areas of the world. Differences in non-verbal communication can cause cultural miscommunication if you aren't educated in the practices of another culture when visiting, or when talking to someone from that culture.
Power Distance
Power distance is a cultural theory that measures how individuals in different cultures view the unequal balance of power. Power distance can be divided into two concepts: high power distance and low power distance. High power distance refers to a culture in which people of a certain societal status hold higher power and are revered and respected for having that power. In high power distance cultures, individuals considered to have higher power are given great deference and respect by those considered to have lower power, and they are often treated with great privilege in society. In low power distance cultures, those considered to have high power, such as managers or owners, may try to level themselves with those considered to have lower power, such as employees or interns, by interacting with them and getting their opinions on certain matters, since the distribution of power is expected to be more equal. Power distance can be measured by the Power Distance Index, which measures the degree of inequality between different cultures. The scale ranges from 0, considered low power distance, to 100, considered high power distance. According to the index created by Geert Hofstede, the countries ranking higher on the power distance scale are the Philippines, Venezuela, India, France, and Belgium, while the countries that rank lower are Canada, Sweden, the United States, Norway, and Finland. Power distance has been studied in various ways by scholars, psychologists, and communication experts. A study conducted by communication experts from across the globe examined how power distance affects voice tone variation and projection among different cultures. The study showed that individuals in lower power distance cultures had a more negative reaction to lowered voice levels than individuals in high power distance cultures. The study also showed that voice control by those in higher-level positions affects individuals' power distance beliefs, employees' work attitudes, and work performance. Louder projection and certain tones have negative impacts on employees in low power distance cultures, while those same projections and tones are normal for those in high power distance cultures. Another study was conducted to show the difference in justice perceptions, such as work procedures and work interactions, among employees and those in managerial positions. This study showed that Chinese employees (from a high power distance culture) react less negatively to criticism from those in managerial positions than American employees (from a low power distance culture); the Americans expressed more frustration and more negative justice perceptions than the Chinese employees. These findings are reasoned to follow from China being a high power distance culture, in which tolerance for inequality and for those in higher power is greater, while the United States has a lower tolerance for inequality. Power distance can be studied in a wide variety of ways to show how different cultures react to different levels of power. Travelers, businesses, employees, managers, and corporations use these studies to better understand how to communicate with different cultures in correct and appropriate ways.
Individualism versus collectivism
Knowing how different cultures interact through language allows for cultural awareness and understanding. A major aspect of cultural communication is individualism versus collectivism. People in individualistic cultures value independence and tend to focus on those closest to them. People in collectivistic cultures think more as a group rather than as single persons. Individuals in individualistic cultures value their own wants, needs, and goals, while individuals in collectivistic cultures value the wants, needs, and goals of the group above their own individual needs. Geert Hofstede, who created the dimensions of national culture, conducted a study to determine the cultural preferences of various nations and to see where exactly countries sit on the scale. The scale ranges from 0, a strongly collectivistic country, to 100, a strongly individualistic country. The scale also showed that a country's position closer to 100 is statistically connected to the country's wealth. Countries considered to be highly individualistic cultures are the US, Canada, Australia, and the United Kingdom. Countries considered to be low on individualism are Guatemala, Ecuador, Panama, and Colombia. The latter countries are considered to be poorer, and the countries listed earlier are considered to be more affluent. Usually, societies and cultures that have a lot of freedom are considered to be individualistic; in these cultures, people are expected to take care of and worry about themselves and look after their own families. In collectivistic cultures, individuals are expected to look after their entire group, village, or community rather than only looking after themselves. In collectivistic cultures, individuals see themselves as part of a collective, link themselves into groups, and prioritize their group's goals over their own. While those in individualistic cultures can be part of groups, they separate themselves from the group and consider themselves more independent of it. Those in individualistic cultures think in terms of "me" and "I," while those in collectivistic cultures think in terms of "we." Both orientations shape how people work in groups and how they prioritize relationships and goals. Psychologists, scholars, and communication experts utilize the differences between individualistic and collectivistic cultures to better understand language and the different dynamics of cultures.
References
Iowa State University. "International Community Resources." Iowa State Study Abroad Center, 19 October 2011. Retrieved 30 October 2011.
"Hofstede's Cultural Dimensions." Kwintessential: Professional Translation Services, Interpreters, Intercultural Communication & Training. Retrieved 31 October 2011. <https://web.archive.org/web/20130704015828/http://www.kwintessential.co.uk/intercultural/dimensions.html>.
"Cross-Cultural Communication." University of Colorado Boulder. Retrieved 31 October 2011. <https://web.archive.org/web/20111115182159/http://www.colorado.edu/conflict/peace/treatment/xcolcomm.htm>.
Cultural studies
Anaphora (linguistics)
In linguistics, anaphora is the use of an expression whose interpretation depends upon another expression in context (its antecedent). In a narrower sense, anaphora is the use of an expression that depends specifically upon an antecedent expression and thus is contrasted with cataphora, which is the use of an expression that depends upon a postcedent expression. The anaphoric (referring) term is called an anaphor. For example, in the sentence Sally arrived, but nobody saw her, the pronoun her is an anaphor, referring back to the antecedent Sally. In the sentence Before her arrival, nobody saw Sally, the pronoun her refers forward to the postcedent Sally, so her is now a cataphor (and an anaphor in the broader, but not the narrower, sense). Usually, an anaphoric expression is a pro-form or some other kind of deictic (contextually dependent) expression. Both anaphora and cataphora are species of endophora, referring to something mentioned elsewhere in a dialog or text.
Anaphora is an important concept for different reasons and on different levels: first, anaphora indicates how discourse is constructed and maintained; second, anaphora binds different syntactical elements together at the level of the sentence; third, anaphora presents a challenge to natural language processing in computational linguistics, since the identification of the reference can be difficult; and fourth, anaphora partially reveals how language is understood and processed, which is relevant to fields of linguistics interested in cognitive psychology.
Nomenclature and examples
The term anaphora is actually used in two ways.
In a broad sense, it denotes the act of referring. Any time a given expression (e.g. a pro-form) refers to another contextual entity, anaphora is present.
In a second, narrower sense, the term anaphora denotes the act of referring backwards in a dialog or text, such as referring to the left when an anaphor points to its left toward its antecedent in languages that are written from left to right. Etymologically, anaphora derives from Ancient Greek ἀναφορά (anaphorá, "a carrying back"), from ἀνά (aná, "up") + φέρω (phérō, "I carry"). In this narrow sense, anaphora stands in contrast to cataphora, which is the act of referring forward in a dialog or text, or pointing to the right in languages that are written from left to right: Ancient Greek καταφορά (kataphorá, "a downward motion"), from κατά (katá, "downwards") + φέρω (phérō, "I carry"). A pro-form is a cataphor when it points to its right toward its postcedent. Both effects together are called anaphora in the broad sense; less ambiguously, anaphora and cataphora, together with self-reference, comprise the category of endophora.
Examples of anaphora (in the narrow sense) and cataphora are given next. Anaphors and cataphors appear in bold, and their antecedents and postcedents are underlined:
Anaphora (in the narrow sense, species of endophora)
a. Susan dropped the plate. It shattered loudly. – The pronoun it is an anaphor; it points to the left toward its antecedent the plate.
b. The music stopped, and that upset everyone. – The demonstrative pronoun that is an anaphor; it points to the left toward its antecedent The music stopped.
c. Fred was angry, and so was I. – The adverb so is an anaphor; it points to the left toward its antecedent angry.
d. If Sam buys a new bike, I shall do it as well. – The verb phrase do it is an anaphor; it points to the left toward its antecedent buys a new bike.
Cataphora (included in the broad sense of anaphora, species of endophora)
a. Because he was very cold, David put on his coat. – The pronoun he is a cataphor; it points to the right toward its postcedent David.
b. Although Sam might do so, I shall not buy a new bike. – The verb phrase do so is a cataphor; it points to the right toward its postcedent buy a new bike.
c. In their free time, the boys play video games. – The possessive adjective their is a cataphor; it points to the right toward its postcedent the boys.
A further distinction is drawn between endophoric and exophoric reference. Exophoric reference occurs when an expression, an exophor, refers to something that is not directly present in the linguistic context, but is rather present in the situational context. Deictic pro-forms are stereotypical exophors, e.g.
Exophora
a. This garden hose is better than that one. – The demonstrative adjectives this and that are exophors; they point to entities in the situational context.
b. Jerry is standing over there. – The adverb there is an exophor; it points to a location in the situational context.
Exophors cannot be anaphors because they do not substantively refer within the dialog or text. There is, however, a question of which portions of a conversation or document a listener or reader actually accesses, and hence whether all the references a term points to within that language stream are noticed. For example, if you hear only a fragment of what someone says using the pronoun her, you might never discover who she is. If you hear the rest of what the speaker says on the same occasion, however, you might discover who she is, either by anaphoric revelation or by exophoric implication: you realize who she must be from what else is said about her, even if her identity is never explicitly mentioned, as in the case of homophoric reference.
A listener might, for example, realize through listening to other clauses and sentences that she is a queen because of some of her attributes or actions mentioned. But which queen? Homophoric reference occurs when a generic phrase obtains a specific meaning through knowledge of its context. For example, the referent of the phrase the Queen (using an emphatic definite article; not the less specific a queen, but also not the more specific Queen Elizabeth) must be determined by the context of the utterance, which identifies the queen in question. Until further revealed by additional contextual words, gestures, images, or other media, a listener would not even know which monarchy or historical period is being discussed. Even after hearing that her name is Elizabeth, and even once a British Queen Elizabeth is indicated, the listener does not know whether this queen means Queen Elizabeth I or Queen Elizabeth II, and must await further clues in additional communications. Similarly, in discussing "the Mayor" of a city, the Mayor's identity must be understood broadly through the context that the speech references: is a particular person meant; a current, future, or past office-holder; the office in a strict legal sense; or the office in a general sense that includes activities a mayor might conduct, or even be expected to conduct, though they may not be explicitly defined for the office?
In generative grammar
The term anaphor is used in a special way in the generative grammar tradition. Here it denotes what would normally be called a reflexive or reciprocal pronoun, such as himself or each other in English, and analogous forms in other languages. The use of the term anaphor in this narrow sense is unique to generative grammar, and in particular, to the traditional binding theory. This theory investigates the syntactic relationship that can or must hold between a given pro-form and its antecedent (or postcedent). In this respect, anaphors (reflexive and reciprocal pronouns) behave very differently from, for instance, personal pronouns.
Complement anaphora
In some cases, anaphora may refer not to its usual antecedent, but to its complement set. In the following example a, the anaphoric pronoun they refers to the children who are eating the ice-cream. Contrastingly, example b has they seeming to refer to the children who are not eating ice-cream:
a. Only a few of the children ate their ice-cream. They ate the strawberry flavor first. – They meaning the children who ate ice-cream
b. Only a few of the children ate their ice-cream. They threw it around the room instead. – They meaning either the children who did not eat ice-cream; or perhaps the children who did not eat ice-cream together with some of those who ate it but did not finish it, or who threw around the ice-cream of those who did not eat it; or even all the children, with those who ate ice-cream throwing around part of their own ice-cream, the ice-cream of others, or the same ice-cream they may have eaten before or after throwing it; or perhaps only some of the children, so that they is not all-inclusive
In its narrower definition, an anaphoric pronoun must refer to some noun (phrase) that has already been introduced into the discourse. In complement anaphora cases, however, the anaphor refers to something that is not yet present in the discourse, since the pronoun's referent has not been formerly introduced, including the case of 'everything but' what has been introduced. The set of ice-cream-eating-children in example b is introduced into the discourse, but then the pronoun they refers to the set of non-ice-cream-eating-children, a set which has not been explicitly mentioned.
Both semantic and pragmatic considerations attend this phenomenon. Following discourse representation theory since the early 1980s, such as the work of Kamp (1981) and Heim (File Change Semantics, 1982), and generalized quantifier theory, such as the work of Barwise and Cooper (1981), it was studied in a series of psycholinguistic experiments in the early 1990s by Moxey and Sanford (1993) and Sanford et al. (1994). In complement anaphora, as with the pronoun in example b, the anaphor refers to some sort of complement set (i.e., only the set of non-ice-cream-eating children), to the maximal set (i.e., all the children, both those who ate ice-cream and those who did not), or to some hybrid or variant set, including potentially one of those noted after example b. The various possible referents in complement anaphora are discussed by Corblin (1996), Kibble (1997), and Nouwen (2003). Resolving complement anaphora is of interest for shedding light on brain access to information, calculation, mental modeling, and communication.
Anaphora resolution – centering theory
There are many theories that attempt to explain how anaphors are related and traced back to their antecedents, with centering theory (Grosz, Joshi, and Weinstein 1983) being one of them. Taking the computational theory of mind view of language, centering theory gives a computational analysis of underlying antecedents. In their original theory, Grosz, Joshi, and Weinstein (1983) propose that some discourse entities in utterances are more "central" than others, and that this degree of centrality imposes constraints on what can be the antecedent.
In the theory, there are different types of centers: forward-looking, backward-looking, and preferred.
Forward-looking centers
A ranked list of the discourse entities in an utterance. The ranking is debated; some approaches focus on theta relations (Yıldırım et al. 2004) and some provide definitive lists.
Backward-looking center
The highest ranked discourse entity in the previous utterance.
Preferred center
The highest ranked discourse entity of the previous utterance that is realised in the current utterance.
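As a rough illustration of how a centering-style ranking can drive pronoun resolution, the sketch below resolves a pronoun to the highest-ranked compatible entity of the previous utterance. This is a minimal sketch under strong simplifying assumptions, not the full centering algorithm: entities are ranked simply by order of mention (standing in for a ranking by grammatical role or theta relations), and compatibility is reduced to grammatical gender. All names and data in the example are invented for illustration.

```python
# Toy centering-style pronoun resolution (illustrative sketch only).
# Each utterance's entities are listed in rank order, a stand-in for
# rankings based on grammatical role or theta relations.

def forward_looking_centers(entities):
    """Return the ranked list of discourse entities (Cf) of an utterance."""
    return list(entities)

def resolve_pronoun(pronoun_gender, previous_entities):
    """Resolve a pronoun to the highest-ranked compatible entity of the
    previous utterance, approximating the backward-looking center (Cb)."""
    for entity, gender in forward_looking_centers(previous_entities):
        if gender == pronoun_gender:
            return entity
    return None  # no compatible antecedent found

# "Susan gave the book to Mary." -> ranked entities with grammatical gender
previous = [("Susan", "fem"), ("Mary", "fem"), ("the book", "neut")]

print(resolve_pronoun("fem", previous))   # "She"  -> Susan (highest-ranked)
print(resolve_pronoun("neut", previous))  # "it"   -> the book
```

A real centering implementation would also score transitions (continue, retain, shift) between utterances and use syntactic analysis to rank entities; the point here is only that ranking constrains which entity a pronoun is resolved to.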
See also
– A phenomenon sometimes viewed as modal or temporal anaphora
Notes
Literature
Bussmann, H., G. Trauth, and K. Kazzazi 1998. Routledge dictionary of language and linguistics. Taylor and Francis.
Chomsky, N. 1981/1993. Lectures on government and binding: The Pisa lectures. Mouton de Gruyter.
Corblin, F. 1996. "Quantification et anaphore discursive: la référence aux complémentaires". Langages 123, 51–74.
Grosz, Barbara J.; Joshi, Aravind K.; and Weinstein, Scott (1983). "Providing a unified account of definite noun phrases in discourse". In Proceedings, 21st Annual Meeting of the Association for Computational Linguistics. 44–50.
Kibble, R. 1997. "Complement anaphora and dynamic binding". In Proceedings from Semantics and Linguistic Theory VII, ed. A. Lawson, 258–275. Ithaca, New York: Cornell University.
McEnery, T. 2000. Corpus-based and computational approaches to discourse anaphora. John Benjamins.
Moxey, L. and A. Sanford 1993. Communicating quantities: A psychological perspective. Laurence Erlbaum Associates.
Nouwen, R. 2003. "Complement anaphora and interpretation". Journal of Semantics, 20, 73–113.
Sanford, A., L. Moxey and K. Patterson 1994. "Psychological studies of quantifiers". Journal of Semantics 11, 153–170.
Schmolz, H. 2015. Anaphora Resolution and Text Retrieval. A Linguistic Analysis of Hypertexts. De Gruyter.
Tognini-Bonelli, E. 2001. Corpus linguistics at work. John Benjamins.
Yıldırım, Savaş & Kiliçaslan, Yilmaz & Erman Aykaç, R. 2004. A Computational Model for Anaphora Resolution in Turkish via Centering Theory: an Initial Approach. 124–128.
External links
What is anaphora?
Pragmatics
Semantics
Semiotics
Syntactic relationships
Generative syntax
Syntax
Tasks of natural language processing
Formal semantics (natural language)
Communication design
Communication design is a mixed discipline between design and information development concerned with how media communicate with people. A communication design approach is concerned with developing the message and aesthetics in media. It also creates new media channels to ensure the message reaches the target audience. Due to overlapping skills, some designers use graphic design and communication design interchangeably.
Communication design can also refer to a systems-based approach, in which the totality of media and messages within a culture or organization are designed as a single integrated process rather than a series of discrete efforts. This is done through communication channels that aim to inform and attract the attention of the target audience. Design skills must be used to create content suitable for different cultures and to maintain a pleasurable visual design. These are crucial pieces of a successful media communications kit.
Within the Communication discipline, the emerging framework for Communication as Design focuses on redesigning interactivity and shaping communication affordances. Software and applications create opportunities for and place constraints on communication. Recently, Guth and Brabham examined the way that ideas compete within a crowdsourcing platform, providing a model for the relationships among design ideas, communication, and platform. The same authors have interviewed technology company founders about the democratic ideals they build into the design of e-government applications and technologies. Interest in the Communication as Design framework continues growing among researchers.
Overview
Communication design seeks to attract, inspire, and motivate people to respond to messages and to make a favorable impact. This impact is oriented toward the objectives of the commissioning body, which can be to build a brand or to move sales; it can also range from changing behaviors to promoting a message to disseminating information. The process of communication design involves strategic business thinking, including market research, creativity, problem-solving, and technical skills and knowledge such as colour theory, page layout, typography, and the creation of visual hierarchies. Communication designers translate ideas and information through a variety of media. In order to establish credibility and influence audiences through the communication, communication designers use both traditional tangible skills and the ability to think strategically in design and marketing terms.
The term communication design is often used interchangeably with visual communication, but it maintains a broader meaning that includes auditory, vocal, touch, and olfactory senses. Examples of communication design practices include information architecture, editing, typography, illustration, web design, animation, advertising, ambient media, visual identity design, performing arts, copywriting and professional writing skills applied in the creative industries.
Education
Students of communication design learn how to create visual messages and broadcast them to the world in new and meaningful ways. In the complex digital environment around us, communication design has become a powerful means of reaching out to the target audiences. Therefore, it expands its focus beyond user-experiences to user-networks. Students learn how to combine communication with art and technology. The communication design discipline involves teaching how to design web pages, video games, animation, motion graphics, and more.
Communication design has content as its main purpose: it must achieve a reaction, or get a customer to see a product in a genuine way, in order to attract sales or effectively communicate a message. Communication design students are often illustrators, graphic designers, web designers, advertising artists, animators, video editors, motion graphic artists, printmakers, and conceptual artists. The term communication design is fairly general, considering that its interdisciplinary practitioners work in various media to get a message across.
Subdisciplines
Advertising
Art direction
Brand management
Content strategy
Copywriting
Creative direction
Graphic design
Illustration
Industrial design
Information architecture
Information graphics
Instructional design
Marketing communications
Performing arts
Presentation
Technical writing
Visual arts
Visual communication design
Visual communication design is design that works in any medium or support of visual communication. This is considered by some to be a more accurate alternative term to cover all types of design applied in communication. It uses a visual channel for message transmission, reflecting the visual language inherent to some media. Unlike the terms graphic design (graphics) or interface design (electronic media), it is not limited to supporting a particular form of content.
Print media design
Print media design is a graphic design discipline that creates designs for printed media. Print design involves the creation of flyers, brochures, book covers, t-shirt prints, business cards, booklets, bookmarks, envelope designs, signs, letterheads, posters, CD covers, print media design templates, and more. The goal of print design is to use visual graphics to communicate a specific message to viewers.
See also
Design elements
Design principles
Communication studies
Swiss Style (design)
Footnotes
External links
Dossier Communication Design in Germany of the Goethe-Institut
Design
Advertising campaigns
Writing
Packaging
Communication studies
Lesson plan
A lesson plan is a teacher's detailed description of the course of instruction or "learning trajectory" for a lesson. A daily lesson plan is developed by a teacher to guide class learning. Details will vary depending on the preference of the teacher, subject being covered, and the needs of the students. There may be requirements mandated by the school system regarding the plan. A lesson plan is the teacher's guide for running a particular lesson, and it includes the goal (what the students are supposed to learn), how the goal will be reached (the method, procedure) and a way of measuring how well the goal was reached (test, worksheet, homework etc.).
Elements of a lesson plan
While there are many formats for a lesson plan, most lesson plans contain some or all of these elements, typically in this order:
Title of the lesson
Time required to complete the lesson
List of required materials
List of objectives, which may be behavioral objectives (what the student can do at lesson completion) or knowledge objectives (what the student knows at lesson completion)
The set (or lead-in, or bridge-in) that focuses students on the lesson's skills or concepts—these include showing pictures or models, asking leading questions, or reviewing previous lessons
An instructional component that describes the sequence of events that make up the lesson, including the teacher's instructional input and, where appropriate, guided practice by students to consolidate new skills and ideas
Independent practice that allows students to extend skills or knowledge on their own
A summary, where the teacher wraps up the discussion and answers questions
An evaluation component, a test for mastery of the instructed skills or concepts—such as a set of questions to answer or a set of instructions to follow
A risk assessment where the lesson's risks and the steps taken to minimize them are documented
An analysis component the teacher uses to reflect on the lesson itself—such as what worked and what needs improving
A continuity component reviews and reflects on content from the previous lesson
Herbartian approach: Johann Friedrich Herbart (1776–1841)
According to Herbart, there are eight lesson plan phases that are designed to provide "many opportunities for teachers to recognize and correct students' misconceptions while extending understanding for future lessons." These phases are: Introduction, Foundation, Brain Activation, Body of New Information, Clarification, Practice and Review, Independent Practice, and Closure.
Preparation/Instruction: It pertains to preparing and motivating children for the lesson content by linking it to the previous knowledge of the student, by arousing the curiosity of the children, and by making an appeal to their senses. This prepares the child's mind to receive new knowledge. "To know where the pupils are and where they should try to be are the two essentials of good teaching." Lessons may be started in the following manner:
a. Two or three interesting but relevant questions
b. Showing a picture, a chart, or a model
c. A situation
Statement of Aim: Announcement of the focus of the lesson in a clear, concise statement such as "Today, we shall study the..."
Presentation/Development: The actual lesson commences here. This step should involve a good deal of activity on the part of the students. The teacher will take the aid of various devices, e.g., questions, illustrations, explanation, exposition, demonstration, and sensory aids. Information and knowledge can be given, explained, revealed, or suggested. The following principles should be kept in mind:
a. Principle of selection and division: The subject matter should be divided into different sections. The teacher should also decide how much to tell and how much the pupils are to find out for themselves.
b. Principle of successive sequence: The teacher should ensure that the succeeding as well as the preceding knowledge is clear to the students.
c. Principle of absorption and integration: In the end, separation of the parts must be followed by their combination to promote understanding of the whole.
Association comparison: It is always desirable that new ideas or knowledge be associated to daily life situations by citing suitable examples and by drawing comparisons with the related concepts. This step is important when we are establishing principles or generalizing definitions.
Generalizing: This concept is concerned with the systematizing of the knowledge learned. Comparison and contrast lead to generalization. An effort should be made to ensure that students draw the conclusions themselves. It should result in students' own thinking, reflection and experience.
Application: It requires a good deal of mental activity to think and apply the principles learned to new situations. Knowledge, when it is put to use and verified, becomes clear and a part of the student's mental make-up.
Recapitulation: In the last step of the lesson plan, the teacher tries to ascertain whether the students have understood or grasped the subject matter. This is used for assessing/evaluating the effectiveness of the lesson by asking students questions on the contents of the lesson or by giving short objective-type questions to test the students' level of understanding; for example, asking students to label different parts on a diagram.
Lesson plans and unit plans
A well-developed lesson plan reflects the interests and needs of students. It incorporates best practices for the educational field. The lesson plan correlates with the teacher's philosophy of education, which is what the teacher feels is the purpose of educating the students.
Secondary English program lesson plans, for example, usually center around four topics. They are literary theme, elements of language and composition, literary history, and literary genre. A broad, thematic lesson plan is preferable, because it allows a teacher to create various research, writing, speaking, and reading assignments. It helps an instructor teach different literature genres and incorporate videotapes, films, and television programs. Also, it facilitates teaching literature and English together. Similarly, history lesson plans focus on content (historical accuracy and background information), analytic thinking, scaffolding, and the practicality of lesson structure and meeting of educational goals. School requirements and a teacher's personal tastes, in that order, determine the exact requirements for a lesson plan.
Unit plans follow much the same format as a lesson plan, but cover an entire unit of work, which may span several days or weeks. Modern constructivist teaching styles may not require individual lesson plans. The unit plan may include specific objectives and timelines, but lesson plans can be more fluid as they adapt to student needs and learning styles.
Unit Planning is the proper selection of learning activities which presents a complete picture. Unit planning is a systematic arrangement of subject matter. "A unit plan is one which involves a series of learning experiences that are linked to achieve the aims composed by methodology and contents," (Samford). "A unit is an organization of various activities, experiences and types of learning around a central problem or purpose developed cooperatively by a group of pupils under a teacher leadership involving planning, execution of plans and evaluation of results," (Dictionary of Education).
Criteria of a Unit Plan
Needs, capabilities, interest of the learner should be considered.
Prepared on the sound psychological knowledge of the learner.
Provide a new learning experience; systematic but flexible.
Sustain the attention of the learner until the end.
Related to the social and physical environment of the learner.
Development of learner's personality.
Lesson planning is a thinking process, not the filling in of a lesson plan template. A lesson plan is envisaged as a blue print, guide map for action, a comprehensive chart of classroom teaching-learning activities, an elastic but systematic approach for the teaching of concepts, skills and attitudes.
The first thing for setting a lesson plan is to create an objective, that is, a statement of purpose for the whole lesson. An objective statement itself should answer what students will be able to do by the end of the lesson. The objective drives the whole lesson plan; it is the reason the lesson plan exists. The teacher should ensure that lesson plan goals are compatible with the developmental level of the students. The teacher ensures as well that their student achievement expectations are reasonable.
Delivery of lesson plans
The following guidelines were set by the Canadian Council on Learning to enhance the effectiveness of the teaching process:
At the start of teaching, provide the students with an overall picture of the material to be presented. When presenting material, use as many visual aids as possible and a variety of familiar examples. Organize the material so that it is presented in a logical manner and in meaningful units. Try to use terms and concepts that are already familiar to the students.
Maximize the similarity between the learning situation and the assessment situation and provide adequate training practice. Give students the chance to use their new skills immediately on their return home through assignments. Communicate the message about the importance of the lesson, increase their motivation level, and control sidelining behaviors by planning rewards for students who successfully complete and integrate the new content. To sustain learning performance, the assessments must be fair and attainable.
Motivation affects teaching outcomes independently of any increase in cognitive ability. Learning motivation is affected by individual characteristics like conscientiousness and by the learning climate. Therefore, it is important to provide as many realistic assignments as possible. Students learn best at their own pace and when correct responses are immediately reinforced, perhaps with a quick "Well done." For many Generation Z students, the use of technology can motivate learning. Simulations, games, virtual worlds, and online networking are already revolutionizing how students learn and how learning experiences are designed and delivered. Learners who are immersed in deep experiential learning in highly visual and interactive environments become intellectually engaged in the experience.
Research shows that it is important to create a perceived need for learning (why should I learn: the realistic, relatable objective) in the minds of students. Only then can students perceive the "how and what to learn" transferred from the educator. Also, provide ample information that helps set the students' expectations about the events and consequences of actions likely to occur in the learning environment. For example, students learning to become adept at differential equations may face stressful situations, high study loads, and a difficult environment. Studies suggest that the negative impact of such conditions can be reduced by letting students know ahead of time what might occur and equipping them with skills to manage.
Lesson plans and classroom management
Creating a reliable lesson plan is an important part of classroom management. Doing so requires the ability to incorporate effective strategies into the classroom, suited to the students and the overall environment. There are many different types of lesson plans and ways of creating them. Teachers can encourage critical thinking in a group setting by creating plans that have the students participate collectively. Visual strategies are another component of lesson plans that help with classroom management: they help a wide variety of students to increase their learning structure and possibly their overall comprehension of the material or of the lesson plan itself. These strategies also give students with disabilities the option to learn in a possibly more efficient way. Teachers need to be aware of the wide range of strategies that can be used to maintain classroom management and engage students. They should find the best strategies to incorporate in their lesson planning for their specific grade, student type, teaching style, etc., and utilize them to their advantage. The classroom tends to flow better when the teacher has a proper lesson planned, as it provides structure for the students, and using class time efficiently begins with sound lesson planning.
Assignments
Assignments are either in-class or take-home tasks to be completed for the next class period. These tasks are important because they help ensure that the instruction provides the students with a goal, the power to get there, and the interest to be engaged in rigorous academic contexts as they acquire content and skills necessary to be able to participate in academic coursework.
Experts cite that, in order to be effective and achieve objectives, the development of these assignment tasks must take into consideration the perceptions of the students because they are different from those of the teacher's. This challenge can be addressed by providing examples instead of abstract concepts or instructions. Another strategy involves the development of tasks that are specifically related to the learners' needs, interests, and age ranges. There are also experts who cite the importance of teaching learners about assignment planning. This is said to facilitate the students' engagement and interest in their assignment. Some strategies include brainstorming about the assignment process and the creation of a learning environment wherein students feel engaged and willing to reflect on their prior learning and to discuss specific or new topics.
There are several assignment types so the instructor must decide whether class assignments are whole-class, small groups, workshops, independent work, peer learning, or contractual:
Whole-class—the teacher lectures to the class as a whole and has the class collectively participate in classroom discussions.
Small groups—students work on assignments in groups of three or four.
Workshops—students perform various tasks simultaneously. Workshop activities must be tailored to the lesson plan.
Independent work—students complete assignments individually.
Peer learning—students work together, face to face, so they can learn from one another.
Contractual work—teacher and student establish an agreement that the student must perform a certain amount of work by a deadline.
These assignment categories (e.g. peer learning, independent, small groups) can also be used to guide the instructor's choice of assessment measures that can provide information about student and class comprehension of the material. As discussed by Biggs (1999), there are additional questions an instructor can consider when choosing which type of assignment would provide the most benefit to students. These include:
What level of learning do the students need to attain before choosing assignments with varying difficulty levels?
What is the amount of time the instructor wants the students to use to complete the assignment?
How much time and effort does the instructor have to provide student grading and feedback?
What is the purpose of the assignment? (e.g. to track student learning; to provide students with time to practice concepts; to practice incidental skills such as group process or independent research)
How does the assignment fit with the rest of the lesson plan? Does the assignment test content knowledge or does it require application in a new context?
Does the lesson plan fit a particular framework?
See also
Curriculum
Syllabus
Bloom's Taxonomy
Instructional Materials
No Child Left Behind
References
Further reading
Ahrenfelt, Johannes, and Neal Watkin. 100 Ideas for Essential Teaching Skills (Continuum One Hundred). New York: Continuum, 2006.
Serdyukov, Peter, and Ryan, Mark. Writing Effective Lesson Plans: The 5-Star Approach. Boston: Allyn & Bacon, 2008.
Salsbury, Denise E., and Melinda Schoenfeldt. Lesson Planning: A Research-Based Model for K-12 Classrooms. Alexandria, VA: Prentice Hall, 2008.
Skowron, Janice. Powerful Lesson Planning: Every Teachers Guide to Effective Instruction. Thousand Oaks, CA: Corwin Press, 2006.
Thompson, Julia G. First Year Teacher's Survival Guide: Ready-To-Use Strategies, Tools & Activities For Meeting The Challenges Of Each School Day (J-B Ed:Survival Guides). San Francisco: Jossey-Bass, 2007.
Tileston, Donna E. Walker. What Every Teacher Should Know About Instructional Planning Thousand Oaks, CA: Corwin Press, 2003.
Wolfe, Shoshana. Your Best Year Yet! A Guide to Purposeful Planning and Effective Classroom Organization (Teaching Strategies). New York: Teaching Strategies, 2006.
School pedagogy
Teaching
Dynamic stochastic general equilibrium
Dynamic stochastic general equilibrium modeling (abbreviated as DSGE, or DGE, or sometimes SDGE) is a macroeconomic method which is often employed by monetary and fiscal authorities for policy analysis, explaining historical time-series data, as well as future forecasting purposes. DSGE econometric modelling applies general equilibrium theory and microeconomic principles in a tractable manner to postulate economic phenomena, such as economic growth and business cycles, as well as policy effects and market shocks.
Terminology
As a practical matter, people often use the term "DSGE models" to refer to a particular class of classically quantitative econometric models of business cycles or economic growth called real business cycle (RBC) models. DSGE models were initially proposed by Kydland & Prescott, and Long & Plosser; Charles Plosser described RBC models as a precursor for DSGE modeling.
As mentioned in the Introduction, DSGE models are the predominant framework of macroeconomic analysis. They are multifaceted, and their combination of micro-foundations and optimising economic behaviour of rational agents allows for a comprehensive analysis of macro effects. As indicated by their name, their defining characteristics are as follows:
Dynamic: The effect of current choices on future uncertainty makes the models dynamic and assigns a certain relevance to the expectations of agents in forming macroeconomic outcomes.
Stochastic: The models take into consideration the transmission of random shocks into the economy and the consequent economic fluctuations.
General: referring to the entire economy as a whole (within the model) in that price levels and output levels are determined jointly. This is opposed to a partial equilibrium, where price levels are taken as given and only output levels are determined within the model economy.
Equilibrium: In accordance with Léon Walras's General Competitive Equilibrium Theory, the model captures the interaction between policy actions and behaviour of agents.
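Taken together, these four properties are often summarized by writing a model's equilibrium conditions in a generic expectational form. The following is a standard textbook presentation, not the equations of any particular model:

```latex
% Generic form of a DSGE model's equilibrium conditions:
% y_t collects the endogenous variables (prices, quantities),
% \varepsilon_t the exogenous stochastic shocks, and E_t denotes
% expectations conditional on information available at time t.
E_t \left[ f\left(y_{t+1},\, y_t,\, y_{t-1},\, \varepsilon_t\right) \right] = 0
```

Solving the model means finding a policy function mapping the current state into choices such that these conditions hold in every period.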
RBC modeling
The formulation and analysis of monetary policy has undergone significant evolution in recent decades, and the development of DSGE models has played a key role in this process. As mentioned above, DSGE models are seen as an update of RBC (real business cycle) models.
Early real business-cycle models postulated an economy populated by a representative consumer who operates in perfectly competitive markets. The only sources of uncertainty in these models are "shocks" in technology. RBC theory builds on the neoclassical growth model, under the assumption of flexible prices, to study how real shocks to the economy might cause business cycle fluctuations.
The "representative consumer" assumption can either be taken literally or reflect a Gorman aggregation of heterogenous consumers who are facing idiosyncratic income shocks and complete markets in all assets. These models took the position that fluctuations in aggregate economic activity are actually an "efficient response" of the economy to exogenous shocks.
The models were criticized on a number of issues:
Microeconomic data cast doubt on some of the key assumptions of the model, such as: perfect credit- and insurance-markets; perfectly friction-less labour markets; etc.
They had difficulty in accounting for some key properties of the aggregate data, such as: the observed volatility in hours worked; the equity premium; etc.
Open-economy versions of these models failed to account for observations such as: the cyclical movement of consumption and output across countries; the extremely high correlation between nominal and real exchange rates; etc.
They are mute on many policy-related issues of importance to macroeconomists and policy makers, such as the consequences of different monetary policy rules for aggregate economic activity.
The Lucas critique
In a 1976 paper, Robert Lucas argued that it is naive to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data, especially highly aggregated historical data. Lucas claimed that the decision rules of Keynesian models, such as the fiscal multiplier, cannot be considered as structural, in the sense that they cannot be invariant with respect to changes in government policy variables, stating:
Given that the structure of an econometric model consists of optimal decision-rules of economic agents, and that optimal decision-rules vary systematically with changes in the structure of series relevant to the decision maker, it follows that any change in policy will systematically alter the structure of econometric models.
This meant that, because the parameters of the models were not structural, i.e. not indifferent to policy, they would necessarily change whenever policy was changed. The so-called Lucas critique followed similar criticism undertaken earlier by Ragnar Frisch, in his critique of Jan Tinbergen's 1939 book Statistical Testing of Business-Cycle Theories, where Frisch accused Tinbergen of not having discovered autonomous relations, but "coflux" relations, and by Jacob Marschak, in his 1953 contribution to the Cowles Commission Monograph, where he submitted that
In predicting the effect of its decisions (policies), the government...has to take account of exogenous variables, whether controlled by it (the decisions themselves, if they are exogenous variables) or uncontrolled (e.g. weather), and of structural changes, whether controlled by it (the decisions themselves, if they change the structure) or uncontrolled (e.g. sudden changes in people's attitude).
The Lucas critique is representative of the paradigm shift that occurred in macroeconomic theory in the 1970s towards attempts at establishing micro-foundations.
Response to the Lucas critique
In the 1980s, macro models emerged that attempted to directly respond to Lucas through the use of rational expectations econometrics.
In 1982, Finn E. Kydland and Edward C. Prescott created a real business cycle (RBC) model to "predict the consequence of a particular policy rule upon the operating characteristics of the economy." The stated, exogenous, stochastic components in their model are "shocks to technology" and "imperfect indicators of productivity." The shocks involve random fluctuations in the productivity level, which shift up or down the trend of economic growth. Examples of such shocks include innovations, the weather, sudden and significant price increases in imported energy sources, stricter environmental regulations, etc. The shocks directly change the effectiveness of capital and labour, which, in turn, affects the decisions of workers and firms, who then alter what they buy and produce. This eventually affects output.
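As an illustration of how such technology shocks are commonly modeled, the sketch below simulates a log-productivity process that follows a first-order autoregression and feeds it through a Cobb-Douglas production function with capital and labour held fixed. The parameter values are illustrative assumptions, not Kydland and Prescott's calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200          # periods to simulate
rho = 0.95       # persistence of the technology shock
sigma = 0.007    # standard deviation of the innovation
alpha = 0.36     # capital share in production
K, L = 10.0, 1.0 # capital and labour, held fixed for simplicity

# log z_t = rho * log z_{t-1} + eps_t : random productivity fluctuations
log_z = np.zeros(T)
for t in range(1, T):
    log_z[t] = rho * log_z[t - 1] + sigma * rng.standard_normal()

# Output inherits the shock's fluctuations: Y_t = z_t * K^alpha * L^(1-alpha)
Y = np.exp(log_z) * K ** alpha * L ** (1 - alpha)
print(f"std of log output: {np.std(np.log(Y)):.4f}")
```

In a full RBC model, capital and hours would respond endogenously to the shock through households' and firms' optimization, amplifying or smoothing these fluctuations.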
The authors stated that, since fluctuations in employment are central to the business cycle, the "stand-in consumer [of the model] values not only consumption but also leisure," meaning that unemployment movements essentially reflect the changes in the number of people who want to work. "Household-production theory," as well as "cross-sectional evidence" ostensibly support a "non-time-separable utility function that admits greater inter-temporal substitution of leisure, something which is needed," according to the authors, "to explain aggregate movements in employment in an equilibrium model." For the K&P model, monetary policy is irrelevant for economic fluctuations.
The associated policy implications were clear: There is no need for any form of government intervention since, ostensibly, government policies aimed at stabilizing the business cycle are welfare-reducing. Since microfoundations are based on the preferences of decision-makers in the model, DSGE models feature a natural benchmark for evaluating the welfare effects of policy changes. Furthermore, the integration of such microfoundations in DSGE modeling enables the model to accurately adjust to shifts in fundamental behaviour of agents and is thus regarded as an "impressive response" to the Lucas critique. The Kydland/Prescott 1982 paper is often considered the starting point of RBC theory and of DSGE modeling in general and its authors were awarded the 2004 Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel.
DSGE modeling
Structure
By applying dynamic principles, dynamic stochastic general equilibrium models contrast with the static models studied in applied general equilibrium models and some computable general equilibrium models.
DSGE models employed by governments and central banks for policy analysis are relatively simple. Their structure is built around three interrelated sections: demand, supply, and the monetary policy equation. These three sections are formally defined by micro-foundations and make explicit assumptions about the behavior of the main economic agents in the economy, i.e. households, firms, and the government. The interaction of the agents in markets covers every period of the business cycle, which ultimately qualifies the "general equilibrium" aspect of this model. The preferences (objectives) of the agents in the economy must be specified. For example, households might be assumed to maximize a utility function over consumption and labor effort. Firms might be assumed to maximize profits and to have a production function specifying the amount of goods produced as a function of the labor, capital and other inputs they employ. Technological constraints on firms' decisions might include costs of adjusting their capital stocks, their employment relations, or the prices of their products.
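A generic textbook statement of such preferences and technology, given here purely as an illustration of the kind of objects a DSGE model specifies, might read:

```latex
% Representative household: maximize expected discounted utility over
% consumption c_t and labour n_t, with discount factor 0 < \beta < 1.
\max_{\{c_t,\, n_t\}} \; E_0 \sum_{t=0}^{\infty} \beta^{t}\, u\!\left(c_t,\, 1 - n_t\right)

% Representative firm: Cobb-Douglas production with technology level A_t,
% capital K_t and labour N_t; \alpha is the capital share.
Y_t = A_t\, K_t^{\alpha} N_t^{1-\alpha}
```

The household's maximization is subject to a budget constraint linking consumption, wage income, and saving; the specific functional forms vary across models.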
Below is an example of the set of assumptions a DSGE is built upon:
Perfect competition in all markets
All prices adjust instantaneously
Rational expectations
No asymmetric information
The competitive equilibrium is Pareto optimal
Firms are identical and price takers
Infinitely lived identical price-taking households
to which the following frictions are added:
Distortionary taxes (Labour taxes) – to account for not lump-sum taxation
Habit persistence (the period utility function depends on a quasi-difference of consumption; see the formalization after this list)
Adjustment costs on investments – to make investments less volatile
Labour adjustment costs – to account for costs firms face when changing the level of employment
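The habit-persistence friction above, for instance, is commonly formalized by letting period utility depend on a quasi-difference of consumption; a standard textbook form (with habit parameter h, an illustrative choice) is:

```latex
% Period utility with habit persistence: what matters is consumption
% relative to a fraction h of the previous period's consumption.
u_t = \frac{\left(c_t - h\, c_{t-1}\right)^{1-\sigma}}{1-\sigma},
\qquad 0 \le h < 1
```

With h = 0 this reduces to standard constant-relative-risk-aversion utility; larger h makes consumers dislike rapid changes in consumption, which helps such models match the observed smoothness of aggregate consumption.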
The models' general equilibrium nature is presumed to capture the interaction between policy actions and agents' behavior, while the models specify assumptions about the stochastic shocks that give rise to economic fluctuations. Hence, the models are presumed to "trace more clearly the shocks' transmission to the economy." This is exemplified in the below explanation of a simplified DSGE model.
Demand defines real activity as a function of the nominal interest rate minus expected inflation, and of expectations regarding future real activity.
The demand block confirms the general economic principle that temporarily high interest rates encourage people and firms to save instead of consuming/investing; as well as suggesting the likelihood of increased current spending under the expectation of promising future prospects, regardless of rate level.
Supply is dependent on demand through the input of the level of activity, which impacts the determination of inflation.
E.g., in times of high activity, firms are required to increase the wage rate in order to encourage employees to work more hours, which leads to a general increase in marginal costs and thus a subsequent increase in expected future and current inflation.
The demand and supply sections simultaneously contribute to a determination of monetary policy. The formal equation specified in this section describes the conditions under which the central bank determines the nominal interest rate.
As such, general central bank behaviour is reflected through this i.e. raising the bank rate (short-term interest rates) in periods of rapid or unsustainable growth and vice versa.
There is a final flow from monetary policy towards demand representing the impact of adjustments in nominal interest rates on real activity and subsequently inflation.
A complete, simplified model of the relationship between these three key features is thus defined. This dynamic interaction between the endogenous variables of output, inflation, and the nominal interest rate is fundamental to DSGE modelling.
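The three blocks just described correspond to the canonical three-equation New Keynesian system found in textbook treatments such as Galí (2008). The stylized version below is a common simplification rather than the equations of any particular central bank's model (x_t is the output gap, π_t inflation, i_t the nominal interest rate, and the ε terms are shocks):

```latex
% Demand (dynamic IS curve): the output gap depends on its expected
% future value and on the real interest rate.
x_t = E_t x_{t+1} - \tfrac{1}{\sigma}\left(i_t - E_t \pi_{t+1}\right) + \varepsilon^{x}_t

% Supply (New Keynesian Phillips curve): inflation depends on expected
% inflation and on the output gap.
\pi_t = \beta\, E_t \pi_{t+1} + \kappa\, x_t + \varepsilon^{\pi}_t

% Monetary policy (Taylor-type rule): the nominal rate responds to
% inflation and the output gap.
i_t = \phi_{\pi}\, \pi_t + \phi_{x}\, x_t + \varepsilon^{i}_t
```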
Schools
Two schools of analysis form the bulk of DSGE modeling: the classic RBC models, and the New-Keynesian DSGE models that build on a structure similar to RBC models, but instead assume that prices are set by monopolistically competitive firms, and cannot be instantaneously and costlessly adjusted. Rotemberg & Woodford introduced this framework in 1997. Introductory and advanced textbook presentations of DSGE modeling are given by Galí (2008) and Woodford (2003). Monetary policy implications are surveyed by Clarida, Galí, and Gertler (1999).
The European Central Bank (ECB) has developed a DSGE model, called the Smets–Wouters model, which it uses to analyze the economy of the Eurozone as a whole. The Bank's analysts state that
developments in the construction, simulation and estimation of DSGE models have made it possible to combine a rigorous microeconomic derivation of the behavioural equations of macro models with an empirically plausible calibration or estimation which fits the main features of the macroeconomic time series.
The main difference between "empirical" DSGE models and the "more traditional macroeconometric models, such as the Area-Wide Model", according to the ECB, is that "both the parameters and the shocks to the structural equations are related to deeper structural parameters describing household preferences and technological and institutional constraints."
The Smets-Wouters model uses seven Eurozone area macroeconomic series: real GDP; consumption; investment; employment; real wages; inflation; and the nominal, short-term interest rate. Using Bayesian estimation and validation techniques, the bank's modeling is ostensibly able to compete with "more standard, unrestricted time series models, such as vector autoregression, in out-of-sample forecasting."
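Bayesian estimation of such models is typically carried out with Markov chain Monte Carlo methods. The sketch below applies a random-walk Metropolis sampler to the persistence parameter of a simple AR(1) process; this is a drastically reduced stand-in for the multi-equation Smets-Wouters likelihood, with the data, prior, and tuning values all chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "data": an AR(1) series with true persistence 0.9.
T, rho_true = 200, 0.9
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()

def log_posterior(rho):
    """Gaussian AR(1) log-likelihood plus a flat prior on (-1, 1)."""
    if not -1.0 < rho < 1.0:
        return -np.inf
    resid = y[1:] - rho * y[:-1]
    return -0.5 * np.sum(resid ** 2)

# Random-walk Metropolis: propose a nearby value, accept with the
# usual Metropolis ratio, otherwise keep the current draw.
draws, rho = [], 0.0
for _ in range(5000):
    proposal = rho + 0.05 * rng.standard_normal()
    if np.log(rng.random()) < log_posterior(proposal) - log_posterior(rho):
        rho = proposal
    draws.append(rho)

print(f"posterior mean of rho: {np.mean(draws[1000:]):.3f}")  # close to 0.9
```

Estimating a full DSGE model adds a state-space representation and Kalman (or particle) filtering to evaluate the likelihood, plus informative priors on the structural parameters.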
Criticism
Bank of Lithuania Deputy Chairman Raimondas Kuodis disputes the very title of DSGE analysis. The models, he claims, are neither dynamic (since they contain no evolution of stocks of financial assets and liabilities), nor stochastic (because we live in a world of Knightian uncertainty and, since future outcomes or possible choices are unknown, risk analysis and expected utility theory are not very helpful), nor general (since they lack a full accounting framework, a stock-flow consistent framework, which would significantly reduce the number of degrees of freedom in the economy), nor even about equilibrium (since markets clear only in a few quarters).
Willem Buiter, Citigroup Chief Economist, has argued that DSGE models rely excessively on an assumption of complete markets, and are unable to describe the highly nonlinear dynamics of economic fluctuations, making training in 'state-of-the-art' macroeconomic modeling "a privately and socially costly waste of time and resources". Narayana Kocherlakota, President of the Federal Reserve Bank of Minneapolis, wrote that
many modern macro models...do not capture an intermediate messy reality in which market participants can trade multiple assets in a wide array of somewhat segmented markets. As a consequence, the models do not reveal much about the benefits of the massive amount of daily or quarterly re-allocations of wealth within financial markets. The models also say nothing about the relevant costs and benefits of resulting fluctuations in financial structure (across bank loans, corporate debt, and equity).
N. Gregory Mankiw, regarded as one of the founders of New Keynesian DSGE modeling, has argued that
New classical and New Keynesian research has had little impact on practical macroeconomists who are charged with [...] policy. [...] From the standpoint of macroeconomic engineering, the work of the past several decades looks like an unfortunate wrong turn.
In the 2010 United States Congress hearings on macroeconomic modeling methods, held on 20 July 2010, and aiming to investigate why macroeconomists failed to foresee the financial crisis of 2007-2010, MIT professor of Economics Robert Solow criticized the DSGE models currently in use:
I do not think that the currently popular DSGE models pass the smell test. They take it for granted that the whole economy can be thought about as if it were a single, consistent person or dynasty carrying out a rationally designed, long-term plan, occasionally disturbed by unexpected shocks, but adapting to them in a rational, consistent way... The protagonists of this idea make a claim to respectability by asserting that it is founded on what we know about microeconomic behavior, but I think that this claim is generally phony. The advocates no doubt believe what they say, but they seem to have stopped sniffing or to have lost their sense of smell altogether.
Commenting on the Congressional session, The Economist asked whether agent-based models might better predict financial crises than DSGE models.
Former Chief Economist and Senior Vice President of the World Bank Paul Romer has criticized the "mathiness" of DSGE models and dismisses the inclusion of "imaginary shocks" in DSGE models that ignore "actions that people take." Romer submits a simplified presentation of real business cycle (RBC) modelling, which, as he states, essentially involves two mathematical expressions: The well known formula of the quantity theory of money, and an identity that defines the growth accounting residual as the difference between growth of output and growth of an index of inputs in production.
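In conventional notation, the two expressions Romer refers to are the quantity equation and the growth-accounting residual; the forms below are the standard textbook statements:

```latex
% Quantity theory of money: money stock M times velocity V equals
% the price level P times real output Y.
M V = P Y

% Growth-accounting residual: the part of output growth not explained
% by the growth of inputs (capital K and labour L, capital share \alpha).
\frac{\Delta A}{A} = \frac{\Delta Y}{Y} - \alpha\, \frac{\Delta K}{K}
  - (1 - \alpha)\, \frac{\Delta L}{L}
```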
Romer assigned the label "phlogiston" to this residual, while criticizing the lack of consideration given to monetary policy in DSGE analysis.
Joseph Stiglitz finds "staggering" shortcomings in the "fantasy world" the models create and argues that "the failure [of macroeconomics] were the wrong microfoundations, which failed to incorporate key aspects of economic behavior". He suggested the models have failed to incorporate "insights from information economics and behavioral economics" and are "ill-suited for predicting or responding to a financial crisis." Oxford University's John Muellbauer put it this way: "It is as if the information economics revolution, for which George Akerlof, Michael Spence and Joe Stiglitz shared the Nobel Prize in 2001, had not occurred. The combination of assumptions, when coupled with the trivialisation of risk and uncertainty...render money, credit and asset prices largely irrelevant... [The models] typically ignore inconvenient truths." Nobel laureate Paul Krugman asked, "Were there any interesting predictions from DSGE models that were validated by events? If there were, I'm not aware of it."
Austrian economists reject DSGE modelling. Critique of DSGE-style macromodeling is at the core of Austrian theory, in which, as opposed to RBC and New Keynesian models where capital is homogeneous, capital is heterogeneous and multi-specific, and, therefore, production functions for that multi-specific capital are discovered only over time. Lawrence H. White concludes that present-day mainstream macroeconomics is dominated by Walrasian DSGE models, with restrictions added to generate Keynesian properties:
Mises consistently attributed the boom-initiating shock to unexpectedly expansive policy by a central bank trying to lower the market interest rate. Hayek added two alternate scenarios. [One is where] fresh producer-optimism about investment raises the demand for loanable funds, and thus raises the natural rate of interest, but the central bank deliberately prevents the market rate from rising by expanding credit. [Another is where,] in response to the same kind of increase [in] the demand for loanable funds, but without central bank impetus, the commercial banking system by itself expands credit more than is sustainable.
Hayek had criticized Wicksell for the confusion of thinking that establishing a rate of interest consistent with intertemporal equilibrium also implies a constant price level. Hayek posited that intertemporal equilibrium requires not a natural rate but the "neutrality of money," in the sense that money does not "distort" (influence) relative prices.
Post-Keynesians reject the notions of macro-modelling typified by DSGE. They consider such attempts as "a chimera of authority," pointing to the 2003 statement by Lucas, the pioneer of modern DSGE modelling:
Macroeconomics in [its] original sense [of preventing the recurrence of economic disasters] has succeeded. Its central problem of depression prevention has been solved, for all practical purposes, and has in fact been solved for many decades.
A basic Post Keynesian presumption, which Modern Monetary Theory proponents share, and which is central to Keynesian analysis, is that the future is unknowable and so, at best, we can make guesses about it based broadly on habit, custom, gut feeling, etc. In DSGE modeling, the central equation for consumption supposedly provides a way in which the consumer links decisions to consume now with decisions to consume later, and thus achieves maximum utility in each period: our marginal utility from consumption today must equal our marginal utility from consumption in the future, with a weighting parameter that refers to the valuation we place on the future relative to today. And since the consumer is supposed always to satisfy this consumption equation, all of us must do so individually, if this approach is to reflect the DSGE microfoundational notions of consumption. However, post-Keynesians state that: no consumer is the same as another in terms of random shocks and uncertainty of income (since some consumers will spend every cent of any extra income they receive while others, typically higher-income earners, spend comparatively little of any extra income); no consumer is the same as another in terms of access to credit; not every consumer really considers what they will be doing at the end of their life in any coherent way, so there is no concept of a "permanent lifetime income", which is central to DSGE models; and, therefore, trying to "aggregate" all these differences into one single "representative agent" is impossible. These assumptions are similar to those made in the so-called Ricardian equivalence, whereby consumers are assumed to be forward-looking and to internalize the government's budget constraints when making consumption decisions, taking decisions on the basis of practically perfect evaluations of available information.
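The consumption equation in question is the representative consumer's intertemporal Euler equation. In a common textbook form (the notation here is generic rather than drawn from any particular DSGE model),

    u'(c_t) = \beta \, (1+r) \, \mathbb{E}_t\left[ u'(c_{t+1}) \right],

where u'(\cdot) is marginal utility, c_t and c_{t+1} are consumption today and in the future, r is the real interest rate, \mathbb{E}_t is the expectation formed today, and \beta \in (0,1) is the weighting parameter expressing the valuation placed on the future relative to today. The post-Keynesian objections above amount to denying that any single such equation, with one \beta and one u, can describe every consumer.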
Extrinsic unpredictability, post-Keynesians state, has "dramatic consequences" for the standard macroeconomic forecasting DSGE models used by governments and other institutions around the world. The mathematical basis of every DSGE model fails when distributions shift, since general-equilibrium theories rely heavily on ceteris paribus assumptions. They point to the Bank of England's explicit admission that none of the models they used and evaluated coped well during the 2007–2008 financial crisis, which, for the Bank, "underscores the role that large structural breaks can have in contributing to forecast failure, even if they turn out to be temporary."
Christian Mueller points out that the fact that DSGE models evolve (see next section) constitutes a contradiction of the modelling approach in its own right and, ultimately, makes DSGE models subject to the Lucas critique. This contradiction arises because the economic agents in the DSGE models fail to account for the fact that the very models on the basis of which they form expectations evolve due to progress in economic research. While the evolution of DSGE models as such is predictable, the direction of this evolution is not. In effect, Lucas's notion of the systematic instability of economic models carries over to DSGE models, implying that they are not solving one of the key problems they are thought to be overcoming.
Evolution of viewpoints
Federal Reserve Bank of Minneapolis president Narayana Kocherlakota acknowledges that DSGE models were "not very useful" for analyzing the financial crisis of 2007-2010 but argues that the applicability of these models is "improving," and claims that there is growing consensus among macroeconomists that DSGE models need to incorporate both "price stickiness and financial market frictions." Despite his criticism of DSGE modelling, he states that modern models are useful:
In the early 2000s, ...[the] problem of fit disappeared for modern macro models with sticky prices. Using novel Bayesian estimation methods, Frank Smets and Raf Wouters demonstrated that a sufficiently rich New Keynesian model could fit European data well. Their finding, along with similar work by other economists, has led to widespread adoption of New Keynesian models for policy analysis and forecasting by central banks around the world.
Still, Kocherlakota observes that in "terms of fiscal policy (especially short-term fiscal policy), modern macro-modeling seems to have had little impact. ... [M]ost, if not all, of the motivation for the fiscal stimulus was based largely on the long-discarded models of the 1960s and 1970s."
In 2010, Rochelle M. Edge, of the Board of Governors of the Federal Reserve System, argued that the work of Smets & Wouters has "led DSGE models to be taken more seriously by central bankers around the world", so that "DSGE models are now quite prominent tools for macroeconomic analysis at many policy institutions, with forecasting being one of the key areas where these models are used, in conjunction with other forecasting methods."
University of Minnesota professor of economics V.V. Chari has pointed out that state-of-the-art DSGE models are more sophisticated than their critics suppose:
The models have all kinds of heterogeneity in behavior and decisions... people's objectives differ, they differ by age, by information, by the history of their past experiences.
Chari also argued that current DSGE models frequently incorporate frictional unemployment, financial market imperfections, and sticky prices and wages, and therefore imply that the macroeconomy behaves in a suboptimal way which monetary and fiscal policy may be able to improve. Columbia University's Michael Woodford concedes that policies considered by DSGE models might not be Pareto optimal, and might not satisfy some other social welfare criterion either. Nonetheless, replying to Mankiw, Woodford argues that the DSGE models commonly used by central banks today, which strongly influence policy makers such as Ben Bernanke, do not provide an analysis so different from traditional Keynesian analysis:
It is true that the modeling efforts of many policy institutions can reasonably be seen as an evolutionary development within the macroeconomic modeling program of the postwar Keynesians; thus if one expected, with the early New Classicals, that adoption of the new tools would require building anew from the ground up, one might conclude that the new tools have not been put to use. But in fact they have been put to use, only not with such radical consequences as had once been expected.
See also
Footnotes
References
Sources
Further reading
Software
DYNARE, free software for handling economic models, including DSGE
IRIS, free, open-source toolbox for macroeconomic modeling and forecasting
External links
Society for Economic Dynamics - Website of the Society for Economic Dynamics, dedicated to advances in DSGE modeling.
DSGE-NET, an "international network for DSGE modeling, monetary and fiscal policy"
General equilibrium theory
New classical macroeconomics
New Keynesian economics | 0.768685 | 0.989769 | 0.76082 |
Transformational leadership | Transformational leadership is a theory of leadership where a leader works with teams or followers beyond their immediate self-interests to identify needed change, creating a vision to guide the change through influence, inspiration, and executing the change in tandem with committed members of a group; This change in self-interests elevates the follower's levels of maturity and ideals, as well as their concerns for the achievement. It is an integral part of the Full Range Leadership Model. Transformational leadership is when leader behaviors influence followers and inspire them to perform beyond their perceived capabilities. Transformational leadership inspires people to achieve unexpected or remarkable results. It gives workers autonomy over specific jobs, as well as the authority to make decisions once they have been trained. This induces a positive change in the followers attitudes and the organization as a whole. Transformational leaders typically perform four distinct behaviors, also known as the four Is. These behaviors are inspirational motivation, idealized influence, intellectual stimulation, and individualized consideration.
Transformational leadership serves to enhance the motivation, morale, and job performance of followers through a variety of mechanisms; these include connecting the follower's sense of identity and self to a project and to the collective identity of the organization; being a role model for followers in order to inspire them and raise their interest in the project; challenging followers to take greater ownership of their work; and understanding the strengths and weaknesses of followers, which allows the leader to align followers with tasks that enhance their performance. It is also important to understand the qualities transformational leadership can bring to a work organization. Transformational leadership enhances commitment, involvement, loyalty, and performance of followers. Followers exert extra effort to show support to the leader, emulate the leader to identify with him or her emotionally, and maintain obedience without losing any sense of self-esteem. Transformational leaders are strong in their abilities to adapt to different situations, share a collective consciousness, self-manage, and be inspirational while leading a group of employees. Transformational leadership can be practiced, but is arguably most effective when it is authentic to the individual. These types of leaders focus on how decision-making benefits their organization and the community rather than personal gain. A transformational leader is, by all accounts, a good leader: they show sound values, good judgement, and great character.
Inspirational motivation is when the leader inspires their followers to achieve. This leader sets high but reasonable goals for their followers and their organization. They inspire commitment and create a shared vision for their organization. Leaders who utilize inspirational motivation motivate followers extrinsically and intrinsically, and are able to articulate their expectations clearly. Inspirational motivation is closely tied to productivity: productivity leads directly to having a source of worth, and can be considered both inspirational and visionary, leading to a positive emotional impact on that leader's followers.
Idealized influence is when the leader acts as a strong role model for their organization and leads by example. These leaders consider the needs of their followers and prioritize those needs. They typically display deep commitment and strong ethics. Followers of these leaders typically try to emulate their leader, as they tend to identify with them easily. When subordinates try to emulate their leader, emotional attachments tend to form. Although a controversial example, Adolf Hitler was a leader who had a profound emotional impact on his subordinates.
Intellectual stimulation is when the leader encourages their followers to think for themselves. These leaders are creative, innovative, and very open to new ideas. They tend to be tolerant of their followers' mistakes, and even encourage them, believing that mistakes promote growth and improvement within the organization. These leaders create learning opportunities for their followers and abandon obsolete practices.
Individualized consideration is when the leader establishes a strong relationship with their followers. These leaders act as a caring, supportive resource for their followers and their organization. They mentor their followers and allocate time to developing their followers' potential. One way in which leaders can develop their followers is by delegating specific tasks that foster an individual's development.
Origins
The concept of transformational leadership was initially introduced by James V. Downton, the first to coin the term "transformational leadership", a concept further developed by leadership expert and presidential biographer James MacGregor Burns. According to Burns, transformational leadership can be seen when "leaders and followers make each other advance to a higher level of morality and motivation." Through the strength of their vision and personality, transformational leaders are able to inspire followers to change expectations, perceptions, and motivations to work towards common goals. Burns also described transformational leaders as those who can move followers up on Maslow's hierarchy, but also move them to go beyond their own interests. Unlike in the transactional approach, it is not based on a "give and take" relationship, but on the leader's personality, traits and ability to make a change through example, articulation of an energizing vision and challenging goals. Transforming leaders are idealized in the sense that they are a moral exemplar of working towards the benefit of the team, organization and/or community. Burns theorized that transforming and transactional leadership were mutually exclusive styles. Later, business researcher Bernard M. Bass expanded upon Burns' original ideas to develop what is today referred to as Bass’ Transformational Leadership Theory. According to Bass, transformational leadership can be defined based on the impact that it has on followers. Transformational leaders, Bass suggested, garner trust, respect, and admiration from their followers. Democracy was central to Burns’ conception of transformational leadership: voters selected their leaders and voted them out if they failed to deliver on their visions. However, this was overlooked by Bass and others who introduced the theory to the business domain.
Bernard M. Bass (1985) extended the work of Burns (1978) by explaining the psychological mechanisms that underlie transforming and transactional leadership. Bass introduced the term "transformational" in place of "transforming." Bass added to the initial concepts of Burns (1978) to help explain how transformational leadership could be measured, as well as how it impacts follower motivation and performance. The extent to which a leader is transformational is measured, first, in terms of his influence on the followers. The followers of such a leader feel trust, admiration, loyalty and respect for the leader and, because of the qualities of the transformational leader, are willing to work harder than originally expected. These outcomes occur because the transformational leader offers followers something more than just working for self-gain; they provide followers with an inspiring mission and vision and give them an identity. The leader transforms and motivates followers through their idealized influence, intellectual stimulation and individual consideration. In addition, this leader encourages followers to come up with new and unique ways to challenge the status quo and to alter the environment to support being successful. Finally, in contrast to Burns, Bass suggested that a leader can simultaneously display both transformational and transactional leadership.
By 1985, transformational leadership had become more defined and developed, with leaders known to use this style possessing the following traits: idealized influence, productive commitment, and inspirational motivation. Transformational leadership made transactional leadership more effective.
Definitions
According to Bass, transformational leadership encompasses several different aspects, including:
Emphasizing intrinsic motivation and positive development of followers
Raising awareness of moral standards
Highlighting important priorities
Fostering higher moral maturity in followers
Creating an ethical climate (shared values, high ethical standards)
Encouraging followers to look beyond self-interests to the common good
Promoting cooperation and harmony
Using authentic, consistent means
Using persuasive appeals based on reason
Providing individual coaching and mentoring for followers
Appealing to the ideals of followers
Allowing freedom of choice for followers
Transformational leaders are described to hold positive expectations for followers, believing that they can do their best. As a result, they inspire, empower, and stimulate followers to exceed normal levels of performance. Transformational leaders also focus on and care about followers and their personal needs and development. Transformational leaders fit well in leading and working with complex work groups and organizations, where beyond seeking an inspirational leader to help guide them through an uncertain environment, followers are also challenged and feel empowered; this nurtures them into becoming loyal, high performers.
There are 4 components to transformational leadership, sometimes referred to as the 4 I's:
Idealized Influence (II) – the leader serves as an ideal role model for followers; the leader "walks the talk," and is admired for this. A transformational leader embodies the qualities that he/she wants in his/her team. In this case, the followers see the leader as a model to emulate. For the followers, it is easy to believe and trust in a transformational leader. This is also referred to as charisma and showing a charismatic personality influences the followers to become more like their leader.
Inspirational Motivation (IM) – Transformational leaders have the ability to inspire and motivate followers through having a vision and presenting that vision. Combined, these first two I's are what constitute the transformational leader's charisma. A transformational leader manages to inspire the followers easily with clarity. The transformational leader convinces the followers with simple and easy-to-understand words, as well as with their own image.
Individualized Consideration (IC) – Transformational leaders demonstrate genuine concern for the needs and feelings of followers and help them self-actualize. This personal attention to each follower assists in developing trust among the organization's members and their authority figure(s). For example, the transformational leader can point out the problems of a member working in a group. From this perspective, the leader can work towards training and developing a follower who is having difficulties in a job. This is an important element because team members come to rely on one another and work together, so decisions can be made more quickly, while the transformational leader increases their buy-in.
Intellectual Stimulation (IS) – the leader challenges followers to be innovative and creative; they encourage their followers to challenge the status quo. A common misunderstanding is that transformational leaders are "soft," but the truth is that they constantly challenge followers to higher levels of performance.
Transformational leaders do one thing transactional leaders don't: they go beyond self-actualization. The importance of transcending self-interests is something lost sight of by those who see the ultimate in maturity of development as self-actualization (Bass, 1999).
Leader Personalities
The appeal of, or preference to engage in, transformational leadership may be influenced by leaders' personalities. The assertive-directing personality type, as measured by the Strength Deployment Inventory, shows a moderate positive correlation with transformational leadership (0.438), while leaders with other personality types showed correlations with other leadership styles: the altruistic-nurturing type correlated with servant leadership, analytic-autonomizing leaders correlated with transactional leadership, and those with a flexible-cohering type correlated with situational leadership.
Five major personality traits have been identified as factors contributing to the likelihood of an individual displaying the characteristics of a transformational leader. Different emphases on elements of these traits point to personality inclinations toward inspirational leadership, transactional leadership, and transformational leadership. These five traits are as follows.
Extraversion
The two main characteristics of extraverts are affiliation and agency, which relate to the social and leadership aspects of their personality, respectively. Extraversion is generally seen as an inspirational trait usually exhibited in transformational leadership.
Neuroticism
Neuroticism generally gives an individual anxiety related to productivity which, in a group setting, can be debilitating to a degree where the individual is unlikely to position themselves in a role of transformational leadership, due to lower self-esteem and a tendency to shirk leadership responsibilities. When neuroticism is reverse-scored, it reflects emotional stability, which would yield a positive correlation with transformational leadership.
Openness to experience
Creative expression and emotional responsiveness have been linked to a general tendency of openness to experience. This trait is also seen as a component of transformational leadership as it relates to the ability to give big-picture visionary leadership for an organization.
Agreeableness
Although not a trait which specifically points to transformational leadership, leaders in general possess an agreeable nature stemming from a natural concern for others and high levels of individual consideration. Charisma and idealized influence are classic abilities of individuals who possess agreeableness.
Conscientiousness
A strong sense of direction and the ability to put large amounts of productive work into tasks are by-products of conscientious leaders. This trait is more linked to a transactional form of leadership, given the management-based abilities of such individuals and the detail-oriented nature of their personality. Results suggest that transformational leaders might give greater importance to values pertaining to others than to values concerning only themselves.
Studies have shown that subordinates' and leaders' ratings of transformational leadership may not converge. According to leaders' self-ratings, the extraverted, intuitive and perceiving preferences favour transformational leadership. In contrast, subordinates' ratings indicated that leaders with a sensing preference are associated with transformational leadership.
Measurement
One of the ways in which transformational leadership is measured is through use of the Multifactor Leadership Questionnaire (MLQ), a survey which identifies different leadership characteristics based on examples and provides a basis for leadership training. Early development was limited because the knowledge in this area was primitive, and as such, finding good examples for the items in the questionnaire was difficult. Subsequent development on the MLQ led to the current version of the survey, the MLQ5X.
The current version of the MLQ5X includes 36 items that are broken down into 9 scales, with 4 items measuring each scale. Subsequent validation work by John Antonakis and his colleagues provided strong evidence supporting the validity and reliability of the MLQ5X. Indeed, Antonakis went on to confirm the viability of the proposed nine-factor MLQ model using two very large samples. Although other researchers have been critical of the MLQ model, since 2003 no one has been able to provide disconfirming evidence of the theorized nine-factor model with sample sizes as large as those published by Antonakis.
In regards to transformational leadership, the first 5 components – Idealized Attributes, Idealized Behaviors, Inspirational Motivation, Intellectual Stimulation, and Individualized Consideration – are considered to be transformational leadership behaviors.
Effectiveness as compared to other leadership styles
Studies have shown that transformational leadership styles are associated with positive outcomes relative to other leadership styles. It is suggested that transformational leadership augments transactional leadership in predicting effects on follower satisfaction and performance. According to studies performed by Lowe, Kroeck, and Sivasubramaniam, charisma (or Idealized Influence) was the variable most strongly related to leader effectiveness among MLQ scales. Other studies show that transformational leadership is positively associated with employee outcomes including commitment, role clarity, and well-being. However, the effectiveness of transformational leadership varies with situational context. For example, it can be more effective when applied to smaller, privately held firms than to complex organizations, given its outreach effect with members of the organization. Nevertheless, it can be concluded that transformational leadership has a positive effect on organizational effectiveness, because transformational leaders can encourage and facilitate change in their subordinates and encourage their development and creativity.
Difference between a Manager and a Leader
Managers are the doers within an organization, group or community. They are tasked with executing the vision by assigning roles and responsibilities to others. They track progress, assess the current state, and identify what it takes to achieve the desired outcome. Leaders are not managers by default: leaders are usually visionaries who have identified a need for change and are committed to seeing the change through to fruition.
Transactional leadership
In contrast to transformational leadership, transactional leadership styles focus on the use of rewards and punishments in order to achieve compliance from followers. According to Burns, the transforming approach creates significant change in the life of people and organizations. It redesigns perceptions and values, and changes the expectations and aspirations of employees. Unlike in the transactional approach, it is not based on a "give and take" relationship, but on the leader's personality, traits and ability to make a change through example, articulation of an energizing vision and challenging goals.
Transformational leaders look towards changing the future to inspire followers and accomplish goals, whereas transactional leaders seek to maintain the status quo, not aiming for progress. Transactional leaders frequently get results from employees by using authority, while transformational leaders have a true vision for their company, are able to inspire people, and are entirely committed to their work. In summary, transformational leaders focus on vision, use charisma and enthusiasm for motivation, and are proactive in nature. On the other hand, transactional leaders focus on goals, use rewards and punishments for motivation, and are reactive in nature.
The MLQ does test for some transactional leadership elements – Contingent Reward and Management-by-Exception – and the results for these elements are often compared to those for the transformational elements tested by the MLQ. Studies have shown that transformational leadership practices lead to higher satisfaction with the leader among followers and greater leader effectiveness, while one transactional practice (contingent reward) leads to higher follower job satisfaction and leader job performance.
Laissez-faire leadership
In a laissez-faire leadership style, a person may be given a leadership position without providing leadership, which leaves followers to fend for themselves. This leads to subordinates having a free hand in deciding policies and methods.
Studies have shown that while transformational leadership styles are associated with positive outcomes, laissez-faire leadership is associated with negative outcomes, especially in terms of follower satisfaction with the leader and leader effectiveness. Laissez-faire leadership should not be confused with delegation of responsibilities, which is often associated with positive leadership; the main distinction of the laissez-faire style is an abdication of responsibility for the outcome when decisions are made by subordinates in the absence of managerial oversight. Also, studies comparing the leadership styles of men and women have shown that female leaders tend to be more transformational in their leadership styles, whereas laissez-faire leadership is more prevalent in male leaders.
Comparison of Styles among Public and Private Companies
Lowe, Kroeck, and Sivasubramaniam (1996) conducted a meta-analysis combining data from studies in both the private and public sector. The results indicated a hierarchy of leadership styles and related subcomponents. Transformational leadership characteristics were the most effective, in the following order of effectiveness from most to least: charismatic-inspiration, intellectual stimulation, and individual consideration. Transactional leadership was the next most effective, in the following order from most to least: contingent reward and management-by-exception. Laissez-faire leadership does not intentionally intervene and, as such, is not measured and has no effectiveness score.
Table 2.3, "Correlations With Effectiveness in Public and Private Organizations", presents the results of this meta-analysis as adapted by Bass (2006) in Transformational Leadership; the table itself is not reproduced here.
Factors affecting use
Phipps suggests that the individual personality of a leader heavily affects their leadership style, specifically with regard to the following components of the Five-factor model of personality: openness to experience, conscientiousness, extraversion/introversion, agreeableness, and neuroticism/emotional stability (OCEAN).
Phipps also proposed that all the Big Five dimensions would be positively related to transformational leadership. Openness to experience allows the leader to be more accepting of novel ideas and thus more likely to stimulate the follower intellectually. Conscientious leaders are achievement oriented and thus more likely to motivate their followers to achieve organizational goals. Extraverted and agreeable individuals are more outgoing and pleasant, respectively, and more likely to have successful interpersonal relationships. Thus, they are more likely to influence their followers and to be considerate towards them. Emotionally stable leaders would be better able to influence their followers because their stability would enable them to be better role models to followers and to thoroughly engage them in the goal fulfillment process.
A specific example of cultural background affecting the effectiveness of transformational leadership is Indian culture, where a nurturant-task style of leadership has been shown to be effective. Singh and Bhandarker (1990) demonstrated that effective transformational leaders in India are like heads of Indian families, taking personal interest in the welfare of their followers. Leaders in Indian organizations are therefore more likely to exhibit transformational behaviors if their followers are more self-effacing in approaching the leaders. It is also hypothesized, in general, that subordinates' being socialized to be less assertive, self-confident, and independent enhances superiors' exhibition of transformational leadership.
Follower characteristics, combined with their perceptions of the leader and their own situation, did appear to moderate the connection between transformational leadership and subordinates’ willingness to take charge and be good organizational citizens. For instance, if subordinates in a work group perceive their leader to be prototypical of them, then transformational leadership would have less of an impact on their willingness to engage in organizational citizenship behaviors. Likewise, if subordinates are goal oriented and possess a traditional view of the organizational hierarchy, they tend to be less affected by transformational leadership. Self-motivated employees are less likely to need transformational leaders to prod them into action, while “traditionalists” tend to see positive organizational citizenship as something expected given their roles as followers—not something they need to be “inspired” to do.
Evidence suggests that the above sets of factors act, in essence, as both inhibitors of and substitutes for transformational leadership. As inhibitors, the presence of any of these factors—either independently or especially collectively—could make the presence of a transformational leader “redundant” since followers’ positive behavior would instead be sparked by their own motivations or perceptions. On the other hand, when these factors are not present (e.g., employees in a work group do not see their leader as “one of us”), then transformational leadership is likely to have a much greater impact on subordinates. In essence, when such “favorable conditions” are not present, managers—and the organizations they work for—should see a better return on investment from transformational leadership.
It was shown that leader continuity enhanced the effect of transformational leadership on role clarity and commitment, indicating that it takes time before transformational leaders actually have an effect on employees. Furthermore, co-worker support enhanced the effect on commitment, reflecting the role of followers in the transformational leadership process. However, there are also factors that serve to balance the exhibition of transformational leadership, including the organizational structure, ongoing change, the leaders' working conditions, and the leaders' elevated commitment to organizational values.
Outcomes
Bernard Bass in Leadership and Performance Beyond Expectations states some leaders are only able to extract competent effort from their employees, while others inspire extraordinary effort. Transformational leadership is the key (Bass, 1985).
Implementing transformational leadership has many positive outcomes, not only in the workplace but in other situations as well. Evidence shows that each of the four components discussed above is significantly associated with positive emotions and outcomes in the workplace as well as in team projects performed online. One recent study indicates that these four components are significantly associated with higher job satisfaction and employee effectiveness. Both intellectual stimulation and inspirational motivation are associated with a higher degree of positive emotions, such as enthusiasm, happiness, and a sense of pride, in the follower's life and work.
Companies seem to be transforming everywhere; growth and culture change are a focus within their core strategies. It is not necessarily about cost structure, but about finding new ways to grow. Models need to be produced to help leaders create the future. Kent Thiry, CEO of DaVita, chose the name DaVita, Italian for "giving life," and settled on a list of core values that included service excellence, teamwork, accountability, and fun. A transformational leader inspires and appeals to the employee's self-interests, while a transactional leader manages and reinforces generally without employee consideration. Aligning the organization behind transformational leaders who commit to, involve themselves with, and develop their employees leads to higher job satisfaction and motivation.
When transformational leadership was used in a nursing environment, researchers found that it led to an increase in organizational commitment. A separate study examined the way that transformational leadership and transactional leadership compare when implemented in an online class. The results of this study indicate that transformational leadership increases cognitive effort while transactional leadership decreases it.
Examples
Nelson Mandela
Nelson Mandela used transformational leadership principles while working to abolish apartheid and enforce change in South Africa. In 1995, he visited Betsie Verwoerd, the widow of the architect of apartheid Hendrik Verwoerd, at her home in Orania. Orania was an Afrikaner homeland and a striking anachronistic symbol of racial separation, and Mandela's recurring emphasis on forgiveness contributed toward healing the prejudices of South Africa and to his vast influence as a leader. In 2000, he was quoted as saying, "For all people who have found themselves in the position of being in jail and trying to transform society, forgiveness is natural because you have no time to be retaliative." This illustrates a common approach in the narratives of transformational leadership: describing a collective or corporate effort in individualised terms, and pointing to the responsibility or opportunity for individuals to commit to making the effort a success. Such an approach is seen in community organising.
He also set an example for others to follow in terms of sacrifice and philanthropy. Schoemaker describes one such instance:
Future
The evolution of transformational leadership in the digital age is tied to the development of organizational leadership in an academic setting. As organizations move from position-based responsibilities to task-based responsibilities, transformational leadership is redefined to continue to develop individual commitment to organizational goals by aligning these goals with the interests of their leadership community. The academic community is a front-runner in this sense of redefining transformational leadership to suit these changes in job definition.
The future of transformational leadership is also related to political globalization and a more homogenous spectrum of economic systems under which organizations find themselves operating. Cultural and geographical dimensions of transformational leadership become blurred as globalization renders ethnically specific collectivist and individualistic effects of organizational behavior obsolete in a more diversified workplace.
The concept of transformational leadership needs further clarification, especially when a leader is labelled as a transformational or transactional leader. While discussing Jinnah's leadership style, Yousaf (2015) argued that it is not the number of followers, but the nature of the change that indicates whether a leader is transformational or transactional.
References
Leadership | 0.764172 | 0.995612 | 0.760819 |
Cynefin framework | The Cynefin framework is a conceptual framework used to aid decision-making. Created in 1999 by Dave Snowden when he worked for IBM Global Services, it has been described as a "sense-making device". is a Welsh word for 'habitat'.
Cynefin offers five decision-making contexts or "domains"—clear (known as simple until 2014, then obvious until being recently renamed), complicated, complex, chaotic, and confusion—that help managers to identify how they perceive situations and make sense of their own and other people's behaviour. The framework draws on research into systems theory, complexity theory, network theory and learning theories.
Background
Terminology
The idea of the Cynefin framework is that it offers decision-makers a "sense of place" from which to view their perceptions. Cynefin is a Welsh word meaning 'habitat', 'haunt', 'acquainted', 'familiar'. Snowden uses the term to refer to the idea that we all have connections, such as tribal, religious and geographical, of which we may not be aware. It has been compared to the Māori word tūrangawaewae, meaning a place to stand, or the "ground and place which is your heritage and that you come from".
History
Snowden, then of IBM Global Services, began work on a Cynefin model in 1999 to help manage intellectual capital within the company. He continued developing it as European director of IBM's Institute of Knowledge Management, and later as founder and director of the IBM Cynefin Centre for Organizational Complexity, established in 2002. Cynthia Kurtz, an IBM researcher, and Snowden described the framework in detail the following year in a paper, "The new dynamics of strategy: Sense-making in a complex and complicated world", published in IBM Systems Journal.
The Cynefin Centre—a network of members and partners from industry, government and academia—began operating independently of IBM in 2004. In 2007 Snowden and Mary E. Boone described the Cynefin framework in the Harvard Business Review. Their paper, "A Leader's Framework for Decision Making", won them an "Outstanding Practitioner-Oriented Publication in OB" award from the Academy of Management's Organizational Behavior division.
Clear
The clear domain represents the "known knowns". This means that there are rules in place (or best practice), the situation is stable, and the relationship between cause and effect is clear: if you do X, expect Y. The advice in such a situation is to "sense–categorize–respond": establish the facts ("sense"), categorize, then respond by following the rule or applying best practice. Snowden and Boone (2007) offer the example of loan-payment processing. An employee identifies the problem (for example, a borrower has paid less than required), categorizes it (reviews the loan documents), and responds (follows the terms of the loan). According to Thomas A. Stewart,
This is the domain of legal structures, standard operating procedures, practices that are proven to work. Never draw to an inside straight. Never lend to a client whose monthly payments exceed 35 percent of gross income. Never end the meeting without asking for the sale. Here, decision-making lies squarely in the realm of reason: Find the proper rule and apply it.
Snowden and Boone write that managers should beware of forcing situations into this domain by oversimplifying, by "entrained thinking" (being blind to new ways of thinking), or by becoming complacent (see human error). When success breeds complacency ("best practice is, by definition, past practice"), there can be a catastrophic clockwise shift into the chaotic domain. They recommend that leaders provide a communication channel, if necessary an anonymous one, so that dissenters (for example, within a workforce) can warn about complacency.
Complicated
The complicated domain consists of the "known unknowns". The relationship between cause and effect requires analysis or expertise; there are a range of right answers. The framework recommends "sense–analyze–respond": assess the facts, analyze, and apply the appropriate good operating practice. According to Stewart: "Here it is possible to work rationally toward a decision, but doing so requires refined judgment and expertise. ... This is the province of engineers, surgeons, intelligence analysts, lawyers, and other experts. Artificial intelligence copes well here: Deep Blue plays chess as if it were a complicated problem, looking at every possible sequence of moves."
Complex
The complex domain represents the "unknown unknowns". Cause and effect can only be deduced in retrospect, and there are no right answers. "Instructive patterns ... can emerge," write Snowden and Boone, "if the leader conducts experiments that are safe to fail." Cynefin calls this process "probe–sense–respond". Hard insurance cases are one example. "Hard cases ... need human underwriters," Stewart writes, "and the best all do the same thing: Dump the file and spread out the contents." Stewart identifies battlefields, markets, ecosystems and corporate cultures as complex systems that are "impervious to a reductionist, take-it-apart-and-see-how-it-works approach, because your very actions change the situation in unpredictable ways."
Chaotic
In the chaotic domain, cause and effect are unclear. Events in this domain are "too confusing to wait for a knowledge-based response", writes Patrick Lambe. "Action—any action—is the first and only way to respond appropriately." In this context, managers "act–sense–respond": act to establish order; sense where stability lies; respond to turn the chaotic into the complex. Snowden and Boone write:
In the chaotic domain, a leader’s immediate job is not to discover patterns but to staunch the bleeding. A leader must first act to establish order, then sense where stability is present and from where it is absent, and then respond by working to transform the situation from chaos to complexity, where the identification of emerging patterns can both help prevent future crises and discern new opportunities. Communication of the most direct top-down or broadcast kind is imperative; there’s simply no time to ask for input.
The September 11 attacks were an example of the chaotic category. Stewart offers others: "the firefighter whose gut makes him turn left or the trader who instinctively sells when the news about the stock seems too good to be true." One crisis executive said of the collapse of Enron: "People were afraid. ... Decision-making was paralyzed. ... You've got to be quick and decisive—make little steps you know will succeed, so you can begin to tell a story that makes sense."
Snowden and Boone give the example of the 1993 Brown's Chicken massacre in Palatine, Illinois—when robbers murdered seven employees in Brown's Chicken and Pasta restaurant—as a situation in which local police faced all the domains. Deputy Police Chief Walt Gasior had to act immediately to stem the early panic (chaotic), while keeping the department running (clear), calling in experts (complicated), and maintaining community confidence in the following weeks (complex).
Confusion
The dark confusion domain in the centre represents situations where there is no clarity about which of the other domains apply (this domain has also been known as disordered in earlier versions of the framework). By definition it is hard to see when this domain applies. "Here, multiple perspectives jostle for prominence, factional leaders argue with one another, and cacophony rules", write Snowden and Boone. "The way out of this realm is to break down the situation into constituent parts and assign each to one of the other four realms. Leaders can then make decisions and intervene in contextually appropriate ways."
Moving through domains
As knowledge increases, there is a "clockwise drift" from chaotic through complex and complicated to clear. Similarly, a "buildup of biases", complacency or lack of maintenance can cause a "catastrophic failure": a clockwise movement from clear to chaotic, represented by the "fold" between those domains. There can be counter-clockwise movement as people die and knowledge is forgotten, or as new generations question the rules; and a counter-clockwise push from chaotic to clear can occur when a lack of order causes rules to be imposed suddenly.
Applications and reception
Cynefin was used by its IBM developers in policy-making, product development, market creation, supply chain management, branding and customer relations. Later uses include analysing the impact of religion on policymaking within the George W. Bush administration, emergency management, network science and the military, the management of food-chain risks, homeland security in the United States, agile software development, and policing the Occupy Movement in the United States.
It has also been used in health-care research, including to examine the complexity of care in the British National Health Service, the nature of knowledge in health care, and the fight against HIV/AIDS in South Africa. In 2017 the RAND Corporation used the Cynefin framework in a discussion of theories and models of decision making. The European Commission has published a field guide to use Cynefin as a "guide to navigate crisis".
Criticism of Cynefin includes that the framework is difficult and confusing, needs a more rigorous foundation, and covers too limited a selection of possible contexts. Another criticism is that terms such as known, knowable, sense, and categorize are ambiguous.
Prof Simon French recognizes "the value of the Cynefin framework in categorising decision contexts and identifying how to address many uncertainties in an analysis" and as such believes it builds on seminal works such as Russell L. Ackoff's Scientific Method: optimizing applied research decisions (1962), C. West Churchman's Inquiring Systems (1967), Rittel and Webber's Dilemmas in a General Theory of Planning (1973), Douglas John White's Decision Methodology (1975), John Tukey's Exploratory data analysis (1977), Mike Pidd's Tools for Thinking: Modelling in Management Science (1996), and Ritchey's General Morphological Analysis (1998).
Firestone and McElroy argue that Cynefin is a model of sensemaking rather than a full model of knowledge management and processing.
Cynefin and theory of constraints
Steve Holt compares Cynefin to the theory of constraints. The theory of constraints argues that most system outcomes are limited by certain bottlenecks (constraints), and improvements away from these constraints tend to be counterproductive because they just place more strain on a constraint. Holt places the theory of constraints within the Cynefin framing by arguing that it moves from complex situations to complicated ones: abductive reasoning and intuition, then logic, are used to create an understanding, before a probe is created to test that understanding.
Cynefin defines several types of constraints. Fixed constraints stipulate that actions must be done in a certain way and in a certain order, and apply in the clear domain; governing constraints are looser, acting more like rules or policies, and apply in the complicated domain; enabling constraints, which operate in the complex domain, allow a system to function but do not control the entire process. Holt argues that constraints in the theory of constraints correspond to Cynefin's fixed and governing constraints, while injections in the theory of constraints correspond to enabling constraints.
See also
I-Space (conceptual framework)
Inquiry
Karl E. Weick
Morphological analysis (problem-solving)
Narrative inquiry
OODA loop
SECI model of knowledge dimensions
There are known knowns
Uncertainty
Volatility, uncertainty, complexity and ambiguity
VPEC-T
Wicked problem
Notes
References
IBM
Knowledge management
Strategy consulting | 0.762681 | 0.997537 | 0.760802 |
Equifinality | Equifinality is the principle that in open systems a given end state can be reached by many potential means. The term and concept is due to the German Hans Driesch, the developmental biologist, later applied by the Austrian Ludwig von Bertalanffy, the founder of general systems theory, and by William T. Powers, the founder of perceptual control theory. Driesch and von Bertalanffy prefer this term, in contrast to "goal", in describing complex systems' similar or convergent behavior. Powers simply emphasised the flexibility of response, since it emphasizes that the same end state may be achieved via many different paths or trajectories.
In closed systems, a direct cause-and-effect relationship exists between the initial condition and the final state of the system: When a computer's 'on' switch is pushed, the system powers up. Open systems (such as biological and social systems), however, operate quite differently. The idea of equifinality suggests that similar results may be achieved with different initial conditions and in many different ways. This phenomenon has also been referred to as isotelesis (from Greek ἴσος isos "equal" and τέλεσις telesis: "the intelligent direction of effort toward the achievement of an end") when in games involving superrationality.
Overview
In business, equifinality implies that firms may establish similar competitive advantages based on substantially different competencies.
In psychology, equifinality refers to how different early experiences in life (e.g., parental divorce, physical abuse, parental substance abuse) can lead to similar outcomes (e.g., childhood depression). In other words, there are many different early experiences that can lead to the same psychological disorder.
In archaeology, equifinality refers to how different historical processes may lead to a similar outcome or social formation. For example, the development of agriculture or the bow and arrow occurred independently in many different areas of the world, yet for different reasons and through different historical trajectories. This highlights that generalizations based on cross-cultural comparisons cannot be made uncritically.
In Earth and environmental sciences, two general types of equifinality are distinguished: process equifinality (concerning real-world open systems) and model equifinality (concerning conceptual open systems). For example, process equifinality in geomorphology indicates that similar landforms might arise as a result of quite different sets of processes. Model equifinality refers to a condition where distinct configurations of model components (e.g., distinct model parameter values) can lead to similar or equally acceptable simulations, i.e. representations of the real-world process of interest. This similarity or equal acceptability is conditional on the objective functions and criteria of acceptability defined by the modeler. While model equifinality has various facets, parameter and structural equifinality are the most widely recognized and studied in modeling. Equifinality (particularly parameter equifinality) and Monte Carlo experiments are the foundation of the GLUE method, which was the first generalised method for uncertainty assessment in hydrological modeling. GLUE is now widely used within and beyond environmental modeling.
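The core of a GLUE-style analysis can be illustrated with a minimal Monte Carlo sketch. The Python below is an illustrative toy, not an implementation from the GLUE literature: the exponential model, the parameter ranges, and the 0.9 acceptability threshold are all assumptions made for this example. It samples many parameter sets, scores each simulation against observations with an informal likelihood measure, and retains every "behavioural" set that passes the threshold.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 50)

# Toy two-parameter model standing in for any environmental model.
def model(a, b, t):
    return a * np.exp(-b * t)

# Synthetic "observations": one known parameter set plus noise.
y_obs = model(2.0, 0.3, t) + rng.normal(0.0, 0.05, t.size)

# Monte Carlo sampling of parameter sets from uniform prior ranges.
n = 20_000
a_s = rng.uniform(0.5, 5.0, n)
b_s = rng.uniform(0.05, 1.0, n)

# Informal likelihood measure: Nash-Sutcliffe efficiency per simulation.
sims = model(a_s[:, None], b_s[:, None], t)          # shape (n, len(t))
sse = ((sims - y_obs) ** 2).sum(axis=1)
nse = 1.0 - sse / ((y_obs - y_obs.mean()) ** 2).sum()

# "Behavioural" sets: every parameter combination whose simulation is
# acceptable under the modeller-chosen threshold.
behavioural = nse > 0.9
print(f"{behavioural.sum()} behavioural parameter sets out of {n}")
print("behavioural a range:", a_s[behavioural].min(), a_s[behavioural].max())
print("behavioural b range:", b_s[behavioural].min(), b_s[behavioural].max())
```

Typically many distinct (a, b) combinations pass the threshold: parameter equifinality in miniature. In GLUE proper, the likelihood weights of the behavioural sets would then be used to construct uncertainty bounds on model predictions.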
See also
GLUE – Generalized Likelihood Uncertainty Estimation (when modeling environmental systems there are many different model structures and parameter sets that may be behavioural or acceptable in reproducing the behaviour of that system)
TMTOWTDI – Computer programming maxim: "there is more than one way to do it"
Underdetermination
Consilience
Convergent evolution
Teleonomy
Degeneracy (biology)
Kruskal's principle
Multicollinearity
References
Publications
Bertalanffy, Ludwig von, General Systems Theory, 1968
Beven, K.J. and Binley, A.M., 1992. The future of distributed models: model calibration and uncertainty prediction, Hydrological Processes, 6, pp. 279–298.
Beven, K.J. and Freer, J., 2001a. Equifinality, data assimilation, and uncertainty estimation in mechanistic modelling of complex environmental systems, Journal of Hydrology, 249, 11–29.
Croft, Gary W., Glossary of Systems Theory and Practice for the Applied Behavioral Sciences, Syntropy Incorporated, Freeland, WA, Prepublication Review Copy, 1996
Durkin, James E. (ed.), Living Groups: Group Psychotherapy and General System Theory, Brunner/Mazel, New York, 1981
Mash, E. J., & Wolfe, D. A. (2005). Abnormal Child Psychology (3rd edition). Wadsworth Canada. pp. 13–14.
Weisbord, Marvin R., Productive Workplaces: Organizing and Managing for Dignity, Meaning, and Community, Jossey-Bass Publishers, San Francisco, 1987
Tang, J.Y. and Zhuang, Q. (2008). Equifinality in parameterization of process-based biogeochemistry models: A significant uncertainty source to the estimation of regional carbon dynamics, J. Geophys. Res., 113, G04010.
Systems theory | 0.780199 | 0.975139 | 0.760802 |
Anomie | In sociology, anomie or anomy is a social condition defined by an uprooting or breakdown of any moral values, standards or guidance for individuals to follow. Anomie is believed to possibly evolve from conflict of belief systems and causes breakdown of social bonds between an individual and the community (both economic and primary socialization).
The term, commonly understood to mean normlessness, is believed to have been popularized by French sociologist Émile Durkheim in his influential book Suicide (1897). Émile Durkheim suggested that Protestants exhibited a greater degree of anomie than Catholics. However, Durkheim first introduced the concept of anomie in his 1893 work The Division of Labour in Society. Durkheim never used the term normlessness; rather, he described anomie as "derangement", and "an insatiable will." Durkheim used the term "the malady of the infinite" because desire without limit can never be fulfilled; it only becomes more intense.
For Durkheim, anomie arises more generally from a mismatch between personal or group standards and wider social standards, or from the lack of a social ethic, which produces moral deregulation and an absence of legitimate aspirations.
History
In 1893, Durkheim introduced the concept of anomie to describe the mismatch of collective guild labour to evolving societal needs when the guild was homogeneous in its constituency. He equated homogeneous (redundant) skills to mechanical solidarity whose inertia hindered adaptation. He contrasted this with the self-regulating behaviour of a division of labour based on differences in constituency, equated to organic solidarity, whose lack of inertia made it sensitive to needed changes.
Durkheim observed that the conflict between the evolved organic division of labour and the homogeneous mechanical type was such that one could not exist in the presence of the other. When solidarity is organic, anomie is impossible, as sensitivity to mutual needs promotes evolution in the division of labour. By contrast, Durkheim held the condition of anomie to be the result of a malfunction of organic solidarity after the transition to mechanical solidarity.
Durkheim's use of anomie was in regards to the phenomenon of industrialization—mass-regimentation that could not adapt due to its own inertia. More specifically, its resistance to change causes disruptive cycles of collective behavior (e.g. economics) due to the necessity of a prolonged buildup of sufficient force or momentum to overcome the inertia.
Later, in 1897, in his studies of suicide, Durkheim associated anomie with the influence of a lack of norms, or of norms that were too rigid. However, such normlessness or norm-rigidity was a symptom of anomie, caused by the lack of differential adaptation that would enable norms to evolve naturally through self-regulation, either to develop norms where none existed or to change norms that had become rigid and obsolete. Durkheim found that Protestant communities had noticeably higher suicide rates than Catholic ones, and attributed this to the individualism and lack of social cohesion prevalent amongst Protestants, which created a poorly integrated society and made Protestants less likely to develop the close communal ties that would be crucial in times of hardship. Conversely, he argued that the Catholic faith binds individuals more strongly together and builds strong social ties, decreasing the risk of suicide and alienation. In this, Durkheim argued that religion is much more important than culture in regard to anomic suicide. This allowed Durkheim to tie social cohesion to suicide rates.
In 1938, Robert K. Merton linked anomie with deviance, arguing that the discontinuity between culture and structure has the dysfunctional consequence of leading to deviance within society. He described five types of deviance in terms of the acceptance or rejection of social goals and the institutionalized means of achieving them.
Etymology
The term anomie—"a reborrowing with French spelling of anomy"—comes from Greek ἀνομία (anomía, 'lawlessness'), namely the privative alpha prefix (a-, 'without') and nomos ('law'). The Greeks distinguished between nomos ('law') and arché ('rule, first principle'). For example, a monarch is a single ruler, but he may still be subject to, and not exempt from, the prevailing laws, i.e. nomos. In the original city-state democracy, majority rule was an aspect of arché because it was a rule-based, customary system, which may or may not make laws, i.e. nomos. Thus, the original meaning of anomie defined anything or anyone against or outside the law, or a condition in which the current laws were not applied, resulting in a state of illegitimacy or lawlessness.
The contemporary English understanding of the word anomie can accept greater flexibility in the word "norm", and some have used the idea of normlessness to reflect a situation similar to the idea of anarchy. However, as used by Émile Durkheim and later theorists, anomie is a reaction against, or a retreat from, the regulatory social controls of society, and is a completely separate concept from anarchy, which consists of the absence of the roles of rulers and ruled.
Social disorder
Nineteenth-century French pioneer sociologist Émile Durkheim borrowed the term anomie from French philosopher Jean-Marie Guyau. Durkheim used it in his influential book Suicide (1897) in order to outline the social (and not individual) causes of suicide, characterized by a rapid change of the standards or values of societies (often erroneously referred to as normlessness) and an associated feeling of alienation and purposelessness. He believed that anomie is common when the surrounding society has undergone significant changes in its economic fortunes, whether for better or for worse, and, more generally, when there is a significant discrepancy between the ideological theories and values commonly professed and what is actually achievable in everyday life. This was contrary to previous theories on suicide, which generally maintained that suicide was precipitated by negative events in a person's life and their subsequent depression.
In Durkheim's view, traditional religions often provided the basis for the shared values which the anomic individual lacks. Furthermore, he argued that the division of labor that had been prevalent in economic life since the Industrial Revolution led individuals to pursue egoistic ends rather than seeking the good of a larger community. Robert King Merton also adopted the idea of anomie to develop strain theory, defining it as the discrepancy between common social goals and the legitimate means to attain those goals. In other words, an individual suffering from anomie would strive to attain the common goals of a specific society yet would not be able to reach these goals legitimately because of the structural limitations in society. As a result, the individual would exhibit deviant behavior. Friedrich Hayek notably uses the word anomie with this meaning.
According to one academic survey, psychometric testing confirmed a link between anomie and academic dishonesty among university students, suggesting that universities needed to foster codes of ethics among students in order to curb it. In another study, anomie was seen as a "push factor" in tourism.
As an older variant, the 1913 Webster's Dictionary reports use of the word anomie as meaning "disregard or violation of the law." However, anomie as a social disorder is not to be confused with anarchy: proponents of anarchism claim that anarchy does not necessarily lead to anomie and that hierarchical command actually increases lawlessness. Some anarcho-primitivists argue that complex societies, particularly industrial and post-industrial societies, directly cause conditions such as anomie by depriving the individual of self-determination and a relatively small reference group to relate to, such as the band, clan or tribe.
In 2003, José Soltero and Romeo Saravia analyzed the concept of anomie in regard to Protestantism and Catholicism in El Salvador. Massive displacement of population in the 1970s, economic and political crises, and cycles of violence are credited with radically changing the religious composition of the country, rendering it one of the most Protestant countries in Latin America. According to Soltero and Saravia, the rise of Protestantism is commonly claimed to be caused by a Catholic failure to "address the spiritual needs of the poor" and the Protestant "deeper quest for salvation, liberation, and eternal life". However, their research does not support these claims, and showed that Protestantism is not more popular amongst the poor. Their findings do confirm the assumptions of anomie, with Catholic communities of El Salvador enjoying high social cohesion, while the Protestant communities have been associated with poorer social integration and internal migration, and tend to be places deeply affected by the Salvadoran Civil War. Additionally, Soltero and Saravia found that Salvadoran Catholicism is tied to social activism, liberation theology and the political left, as opposed to the "right wing political orientation, or at least a passive, personally inward orientation, expressed by some Protestant churches". They conclude that their research contradicts the theory that Protestantism responds to the spiritual needs of the poor more adequately than Catholicism, while also disproving the claim that Protestantism appeals more to women.
The study by Soltero and Saravia also found a link between Protestantism and lack of access to healthcare.
Synnomie
Freda Adler coined synnomie as the opposite of anomie. Using Émile Durkheim's concept of social solidarity and collective consciousness, Adler defined synnomie as "a congruence of norms to the point of harmonious accommodation".
Adler described societies in a synnomie state as "characterized by norm conformity, cohesion, intact social controls and norm integration". Social institutions such as the family, religion and communities, largely serve as sources of norms and social control to maintain a synnomic society.
In culture
In Albert Camus's existentialist novel The Stranger, Meursault—the bored, alienated protagonist—struggles to construct an individual system of values as he responds to the disappearance of the old. He exists largely in a state of anomie, as seen from the apathy evinced in the opening lines: "Aujourd'hui, maman est morte. Ou peut-être hier, je ne sais pas" ("Today mum died. Or maybe yesterday, I don't know").
Fyodor Dostoyevsky expresses a similar concern about anomie in his novel The Brothers Karamazov. The Grand Inquisitor remarks that in the absence of God and immortal life, everything would be lawful. In other words, any act becomes thinkable and there is no moral compass, which leads to apathy and detachment.
In The Ink Black Heart of the Cormoran Strike novels, written by J. K. Rowling under the pseudonym Robert Galbraith, the main antagonist goes by the online handle of "Anomie".
See also
References
Sources
Durkheim, Émile. 1893. The Division of Labour in Society.
Marra, Realino. 1987. Suicidio, diritto e anomia. Immagini della morte volontaria nella civiltà occidentale. Napoli: Edizioni Scientifiche Italiane.
—— 1989. "Geschichte und aktuelle Problematik des Anomiebegriffs." Zeitschrift für Rechtssoziologie 11(1):67–80.
Orru, Marco. 1983. "The Ethics of Anomie: Jean Marie Guyau and Émile Durkheim." British Journal of Sociology 34(4):499–518.
Riba, Jordi. 1999. La Morale Anomique de Jean-Marie Guyau. L'Harmattan.
External links
Deflem, Mathieu. 2015. "Anomie: History of the Concept." pp. 718–721 in International Encyclopedia of Social and Behavioral Sciences, Second Edition (Volume 1), edited by James D. Wright. Oxford, UK: Elsevier.
"Anomie" discussed at the Émile Durkheim Archive.
Featherstone, Richard, and Mathieu Deflem. 2003. "Anomie and Strain: Context and Consequences of Merton's Two Theories." Sociological Inquiry 73(4):471–489.
Deviance (sociology)
Émile Durkheim
Social philosophy
Sociological terminology
Sociological theories
Cultural evolution
Cultural evolution is an evolutionary theory of social change. It follows from the definition of culture as "information capable of affecting individuals' behavior that they acquire from other members of their species through teaching, imitation and other forms of social transmission". Cultural evolution is the change of this information over time.
Cultural evolution, historically also known as sociocultural evolution, was originally developed in the 19th century by anthropologists building on Charles Darwin's research on evolution. Today, cultural evolution has become the basis for a growing field of scientific research in the social sciences, including anthropology, economics, psychology, and organizational studies. Previously, it was believed that social change resulted from biological adaptations; anthropologists now commonly accept that social changes arise from a combination of social, environmental, and biological influences (viewed from a nature vs nurture framework).
There have been a number of different approaches to the study of cultural evolution, including dual inheritance theory, sociocultural evolution, memetics, cultural evolutionism, and other variants on cultural selection theory. The approaches differ not just in the history of their development and discipline of origin but in how they conceptualize the process of cultural evolution and the assumptions, theories, and methods that they apply to its study. In recent years, there has been a convergence of the cluster of related theories towards seeing cultural evolution as a unified discipline in its own right.
History
Aristotle thought that development of cultural form (such as poetry) stops when it reaches its maturity. In 1873, in Harper's New Monthly Magazine, it was written: "By the principle which Darwin describes as natural selection short words are gaining the advantage over long words, direct forms of expression are gaining the advantage over indirect, words of precise meaning the advantage of the ambiguous, and local idioms are everywhere in disadvantage".
Cultural evolution, in the Darwinian sense of variation and selective inheritance, could be said to trace back to Darwin himself. He argued for both customs (1874 p. 239) and "inherited habits" as contributing to human evolution, grounding both in the innate capacity for acquiring language.
Darwin's ideas, along with those of thinkers such as Comte and Quetelet, influenced a number of what would now be called social scientists in the late nineteenth and early twentieth centuries. Hodgson and Knudsen single out David George Ritchie and Thorstein Veblen, crediting the former with anticipating both dual inheritance theory and universal Darwinism. Despite the stereotypical image of social Darwinism that developed later in the century, neither Ritchie nor Veblen was on the political right.
The early years of the 20th century, and particularly World War I, saw biological concepts and metaphors shunned by most social sciences. Even uttering the word evolution carried "serious risk to one's intellectual reputation." Darwinian ideas were also in decline following the rediscovery of Mendelian genetics but were revived, especially by Fisher, Haldane, and Wright, who developed the first population genetic models and, as it became known, the modern synthesis.
Cultural evolutionary concepts, or even metaphors, revived more slowly. If there was one influential individual in the revival, it was probably Donald T. Campbell. In 1960 he drew on Wright's work to propose a parallel between genetic evolution and the "blind variation and selective retention" of creative ideas; this work was developed into a full theory of "socio-cultural evolution" in 1965 (a work that includes references to other works in the then-current revival of interest in the field). Campbell (1965, p. 26) was clear that he perceived cultural evolution not as an analogy "from organic evolution per se, but rather from a general model for quasiteleological processes for which organic evolution is but one instance".
Others pursued more specific analogies, notably the anthropologist F. T. (Ted) Cloak, who argued in 1975 for the existence of learnt cultural instructions (cultural corpuscles or i-culture) resulting in material artefacts (m-culture) such as wheels. The debate thereby introduced, as to whether cultural evolution requires neurological instructions, continues to the present day.
Unilinear theory
In the 19th century, cultural evolution was thought to follow a unilineal pattern whereby all cultures progressively develop over time. The underlying assumption was that cultural evolution itself led to the growth and development of civilization.
Thomas Hobbes in the 17th century declared indigenous culture to have "no arts, no letters, no society", and he described the life it offered as "solitary, poor, nasty, brutish, and short". He, like other scholars of his time, reasoned that everything positive and esteemed resulted from the slow development away from this poor, lowly state of being.
Under the theory of unilinear cultural evolution, all societies and cultures develop on the same path. The first to present a general unilineal theory was Herbert Spencer. Spencer suggested that humans develop into more complex beings as culture progresses: where people originally lived in "undifferentiated hordes", culture progresses and develops to the point where civilization gives rise to hierarchies. The concept behind unilinear theory is that the steady accumulation of knowledge and culture leads to the separation of the various modern-day sciences and the build-up of cultural norms present in modern-day society.
In Lewis H. Morgan's book Ancient Society (1877), Morgan labels seven differing stages of human culture: lower, middle, and upper savagery; lower, middle, and upper barbarism; and civilization. He justifies this staging classification by referencing societies whose cultural traits resembled those of each of his stage classifications of the cultural progression. Morgan gave no example of lower savagery, as even at the time of writing few examples remained of this cultural type. At the time of expounding his theory, Morgan's work was highly respected and became a foundation for much of anthropological study that was to follow.
Cultural particularism
There began a widespread condemnation of unilinear theory in the late 19th century. Unilinear cultural evolution implicitly assumes that culture was born out of the United States and Western Europe. That was seen by many to be racist, as it assumed that some individuals and cultures were more evolved than others.
Franz Boas, a German-born anthropologist, was the instigator of the movement known as 'cultural particularism', in which the emphasis shifted to a multilinear approach to cultural evolution. That differed from the previously favoured unilinear approach in the sense that cultures were no longer compared; each was assessed in its own right. Boas, along with several of his pupils, notably A. L. Kroeber, Ruth Benedict and Margaret Mead, changed the focus of anthropological research to the effect that, instead of generalizing cultures, the attention was now on collecting empirical evidence of how individual cultures change and develop.
Multilinear theory
Cultural particularism dominated popular thought for the first half of the 20th century before American anthropologists, including Leslie A. White, Julian H. Steward, Marshall D. Sahlins, and Elman R. Service, revived the debate on cultural evolution. These theorists were the first to introduce the idea of multilinear cultural evolution.
Under multilinear theory, there are no fixed stages (as in unilinear theory) of cultural development. Instead, there are several stages of differing lengths and forms. Although individual cultures develop differently and cultural evolution occurs differently, multilinear theory acknowledges that cultures and societies do tend to develop and move forward.
Leslie A. White focused on the idea that different cultures had differing amounts of 'energy'; he argued that with greater energy, societies could possess greater levels of social differentiation. He rejected the separation of modern societies from primitive societies. In contrast, Steward argued, much like Darwin's theory of evolution, that culture adapts to its surroundings. 'Evolution and Culture' by Sahlins and Service is an attempt to condense the views of White and Steward into a universal theory of multilinear evolution.
Robert Wright recognized the inevitable development of cultures. He proposed that population growth was a crucial component of cultural evolution. Population has a symbiotic relationship with technological, economic, and political development.
Memetics
Richard Dawkins' 1976 book The Selfish Gene proposed the concept of the "meme", which is analogous to that of the gene. A meme is an idea-replicator that can reproduce itself, by jumping from mind to mind via the process of one human learning from another via imitation. Along with the "virus of the mind" image, the meme might be thought of as a "unit of culture" (an idea, belief, pattern of behaviour, etc.), which spreads among the individuals of a population. The variation and selection in the copying process enables Darwinian evolution among memeplexes and therefore is a candidate for a mechanism of cultural evolution. As memes are "selfish" in that they are "interested" only in their own success, they could well be in conflict with their biological host's genetic interests. Consequently, a "meme's eye" view might account for certain evolved cultural traits, such as suicide terrorism, that are successful at spreading the meme of martyrdom, but fatal to their hosts and often other people.
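As a toy illustration of the selection dynamic just described (my own sketch, not a model from Dawkins or the memetics literature), the Python simulation below gives each meme variant a copying fitness that is deliberately decoupled from any benefit to its host; imitation plus occasional miscopying is enough for high-transmissibility variants to take over the population. All parameter values are arbitrary.

```python
import random

random.seed(1)

# Each meme variant is represented only by its transmission fitness
# (how readily it is copied) -- the "selfish meme" in miniature.
N_AGENTS, N_STEPS, MUTATION_RATE = 200, 300, 0.01

# Start everyone with a low-transmissibility meme.
population = [0.1] * N_AGENTS

for _ in range(N_STEPS):
    for i in range(N_AGENTS):
        model = random.randrange(N_AGENTS)  # pick a random cultural model
        # Selection: copy the model's meme with probability equal to that
        # meme's own transmissibility.
        if random.random() < population[model]:
            population[i] = population[model]
        # Variation: occasional imperfect copying creates new variants.
        if random.random() < MUTATION_RATE:
            population[i] = min(1.0, max(0.0,
                population[i] + random.uniform(-0.05, 0.05)))

print("mean transmissibility after selection:",
      round(sum(population) / N_AGENTS, 3))
```

Running the sketch shows mean transmissibility rising far above its starting value of 0.1, even though no agent is made better off by hosting a more copyable meme.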
Evolutionary epistemology
"Evolutionary epistemology" can also refer to a theory that applies the concepts of biological evolution to the growth of human knowledge and argues that units of knowledge themselves, particularly scientific theories, evolve according to selection. In that case, a theory, like the germ theory of disease, becomes more or less credible according to changes in the body of knowledge surrounding it.
One of the hallmarks of evolutionary epistemology is the notion that empirical testing alone does not justify the pragmatic value of scientific theories; rather, social and methodological processes select those theories with the closest "fit" to a given problem. The mere fact that a theory has survived the most rigorous empirical tests available does not, in the calculus of probability, predict its ability to survive future testing. Karl Popper used Newtonian physics as an example of a body of theories so thoroughly confirmed by testing as to be considered unassailable, yet they were nevertheless improved upon by Albert Einstein's bold insights into the nature of space-time. For the evolutionary epistemologist, all theories are true only provisionally, regardless of the degree of empirical testing they have survived.
Popper is considered by many to have given evolutionary epistemology its first comprehensive treatment, but Donald T. Campbell had coined the phrase in 1974.
Dual inheritance theory
Criticism and controversy
As a relatively new and growing scientific field, cultural evolution is undergoing much formative debate. Some of the prominent conversations are revolving around Universal Darwinism, dual inheritance theory, and memetics.
More recently, cultural evolution has drawn conversations from multi-disciplinary sources with movement towards a unified view between the natural and social sciences. There remains some accusation of biological reductionism, as opposed to cultural naturalism, and scientific efforts are often mistakenly associated with Social Darwinism. However, some useful parallels between biological and social evolution still appear to be found.
The criticism of cultural evolution by researchers Alberto Acerbi and Alex Mesoudi concerns the ambiguity surrounding the analogy between cultural and genetic evolution. They clarify the distinction between cultural selection (high-fidelity replication of traits) and cultural attraction (reconstruction of traits with lower fidelity). They argue that both mechanisms coexist in cultural evolution, making it essential to empirically determine their prevalence in different contexts, thereby addressing confusion in the field.
Criticism of historic approaches to cultural evolution
Cultural evolution has been criticized throughout the two centuries in which it developed into the form it holds today. Morgan's theory of evolution implies that all cultures follow the same basic pattern. Human culture is not linear: different cultures develop in different directions and at differing paces, and it is not satisfactory or productive to assume that cultures develop in the same way.
A further key critique of cultural evolutionism is what is known as "armchair anthropology". The name results from the fact that many of the anthropologists advancing theories had not seen first hand the cultures they were studying. The research and data collected was carried out by explorers and missionaries as opposed to the anthropologists themselves. Edward Tylor was the epitome of that and did very little of his own research. Cultural evolution is also criticized for being ethnocentric; cultures are still seen as attempting to emulate western civilization. Under ethnocentricity, primitive societies are said to not yet be at the cultural levels of other Western societies.
Much of the criticism aimed at cultural evolution is focused on the unilinear approach to social change. Broadly speaking in the second half of the 20th century the criticisms of cultural evolution have been answered by the multilinear theory. Ethnocentricity, for example, is more prevalent under the unilinear theory.
Some recent approaches, such as dual inheritance theory, make use of empirical methods including psychological and animal studies, field site research, and computational models.
See also
Notes
References
Fernlund, Kevin Jon. "The Great Battle of the Books between the Cultural Evolutionists and the Cultural Relativists, from the Beginning of Infinity to the End of History." Journal of Big History 4, 3 (2020): 6–30.
Further reading
Modern review books
Mesoudi, A. (2011). Cultural evolution: how Darwinian theory can explain human culture and synthesize the social sciences. University of Chicago Press
In evolutionary biology
Jablonka, E., Lamb, M.J., (2014). Evolution in Four Dimensions, revised edition: Genetic, Epigenetic, Behavioral, and Symbolic Variation in the History of Life. MIT Press.
External links
http://plato.stanford.edu/entries/evolution-cultural/
Cultural Evolution Society
Sociocultural evolution theory
Structure of observed learning outcome
The structure of observed learning outcomes (SOLO) taxonomy is a model that describes levels of increasing complexity in students' understanding of subjects. It was proposed by John B. Biggs and Kevin F. Collis.
The model consists of five levels of understanding:
Pre-structural – The task is not attacked appropriately; the student hasn't really understood the point and uses too simple a way of going about it. Students in the pre-structural stage of understanding usually respond to questions with irrelevant comments.
Uni-structural – The student's response only focuses on one relevant aspect. Students in the uni-structural stage of understanding usually give slightly relevant but vague answers that lack depth.
Multi-structural – The student's response focuses on several relevant aspects, but they are treated independently and additively. Assessment of this level is primarily quantitative. Students in the multi-structural stage may know the concept in disconnected pieces but not know how to present or explain it as a whole.
Relational – The different aspects have become integrated into a coherent whole. This level is what is normally meant by an adequate understanding of some topic. At the relational stage, students can identify various patterns and view a topic from distinct perspectives.
Extended abstract – The previous integrated whole may be conceptualised at a higher level of abstraction and generalised to a new topic or area. At this stage, students may apply the classroom concepts in real life.
See also
References
External links
Teaching Teaching & Understanding Understanding (short-film about Constructive Alignment and The SOLO Taxonomy)
Educational technology
Educational classification systems
Cultural-historical activity theory
Cultural-historical activity theory (CHAT) is a theoretical framework to conceptualize and analyse the relationship between cognition (what people think and feel) and activity (what people do). The theory was founded by L. S. Vygotsky and Aleksei N. Leontiev, who were part of the cultural-historical school of Russian psychology. The Soviet philosopher of psychology S. L. Rubinshtein developed his own variant of activity as a philosophical and psychological theory, independent of Vygotsky's work (V. Lektorsky in Engeström, Miettinen & Punamäki 1999, p. 66; Brushlinskii 2004). Political restrictions in Stalinist Russia had suppressed cultural-historical psychology – also known as the Vygotsky School – in the mid-thirties, which meant that the core "activity" concept remained confined to the field of psychology. Vygotsky's insight into the dynamics of consciousness was that it is essentially subjective and shaped by the history of each individual's social and cultural experiences.
Since the 1990s, CHAT has attracted growing interest among academics worldwide. It has been described as "a cross-disciplinary framework for studying how humans transform natural and social reality, including themselves, as an ongoing culturally and historically situated, materially and socially mediated process". CHAT explicitly incorporates the mediation of activities by society, which means that it can be used to link concerns normally examined independently by sociologists of education and (social) psychologists (Roth, Radford & Lacroix 2012). Core ideas are: 1) humans act collectively, learn by doing, and communicate in and via actions; 2) humans make, employ, and adapt tools to learn and communicate; and 3) community is central to the process of making and interpreting meaning – and thus to all forms of learning, communicating, and acting.
The term CHAT was coined by Michael Cole and popularized by Yrjö Engeström to promote the unity of what, by the 1990s, had become a variety of currents harking back to Vygotsky's work. Prominent among those currents are cultural-historical psychology, in use since the 1930s, and activity theory, in use since the 1960s.
Historical overview
Origins: revolutionary Russia
CHAT traces its lineage to dialectical materialism, classical German philosophy, and the work of Lev Vygotsky, Aleksei N. Leontiev and Aleksandr Luria, known as "the founding troika" of the cultural-historical approach to social psychology. It drew in particular on Goethe's ideas of romantic science, which were later taken up by Hegel. The conceptual meaning of "activity" is rooted in the German word Tätigkeit. Hegel is considered the first philosopher to point out that the development of human knowledge is not spiritually given, but developed in history from living and working in natural environments. In a radical departure from the behaviorism and reflexology that dominated much of psychology in the early 1920s, the troika formulated, in the spirit of Karl Marx's Theses on Feuerbach, the concept of activity, i.e., "artifact-mediated and object-oriented action". By bringing together the notions of history and culture in the understanding of human activity, they were able to transcend the Cartesian dualism between subject and object, internal and external, between people and society, between individual inner consciousness and the outer world of society.
At the beginning of and into the mid-20th century, psychology was dominated by schools of thought that ignored real-life processes in psychological functioning (e.g. Gestalt psychology, behaviorism and cognitivism). Lev Vygotsky, who developed the foundation of cultural-historical psychology based on the concept of mediation, published six books on psychology topics during a working life that spanned only ten years. He died of tuberculosis in 1934 at the age of 37. A. N. Leontiev worked with Lev Vygotsky and Alexandr Luria from 1924 to 1930, collaborating on the development of a Marxist psychology. Leontiev left Vygotsky's group in Moscow in 1931 to take up a position in Kharkov, where he was joined by local psychologists, including Pyotr Galperin and Pyotr Zinchenko. He continued to work with Vygotsky for some time but, eventually, there was a split, although they continued to communicate with one another on scientific matters. Leontiev returned to Moscow in 1934. Contrary to popular belief, Vygotsky's work was never banned in Stalinist Soviet Russia.
In 1950, A. N. Leontiev became the Head of the Psychology Department at the Faculty of Philosophy of the Lomonosov Moscow State University (MGU). This department became an independent faculty in 1966, and he remained there until his death in 1979. Leontiev's formulation of activity theory had, after 1962, become the new "official" basis for Soviet psychology. In the two decades between a thaw in the suppression of scientific enquiry in Russia and the deaths of Vygotsky's continuers, contact was made with the West.
Developments in the West
Michael Cole, a psychology post-graduate exchange student, arrived in Moscow in 1962 for a one-year stint of research under Alexandr Luria. He was one of the first Westerners to present Luria's and Vygotsky's ideas to an Anglo-Saxon public. This, and a steady flow of books translated from Russian, ensured the gradual establishment of a cultural psychology base in the West. The earliest books translated into English were Lev Vygotsky's Thought and Language (1962), Luria's Cognitive Development (1976), Leontiev's Activity, Consciousness, and Personality (1978) and Wertsch's The Concept of Activity in Soviet Psychology (1981). Principal among the groups promoting CHAT-related research was Yrjö Engeström's Helsinki-based CRADLE. In 1982, Engeström organized an Activity Conference to concentrate on teaching and learning issues. This was followed by the Aarhus (Denmark) conference in 1983 and the Utrecht (Netherlands) conference in 1984. In October 1986, West Berlin's College of Arts hosted the first International Congress on Activity Theory. The second ISCRAT congress took place in 1990, and in 1992 ISCRAT became a formal legal organization with its own by-laws in Amsterdam. Further ISCRAT conferences followed in Rome (1993), Moscow (1995), Aarhus (1998) and Amsterdam (2002), when ISCRAT and the Conference for Socio-Cultural Research merged into ISCAR. From then on, ISCAR has organized an international congress every three years: Sevilla (2005), San Diego (2008), Rome (2011), Sydney (2014) and Quebec (2017).
In recent years, the implications of activity theory in organizational development have been the focus of researchers at the Centre for Activity Theory and Developmental Work Research (CATDWR), now known as CRADLE, at the University of Helsinki, as well as Mike Cole at the Laboratory of Comparative Human Cognition (LCHC) at the University of California San Diego.
Three generations of activity theory
Diverse philosophical and psychological sources inform activity theory. In subsequent years, a simplified picture emerged, namely the idea that there are three principal 'stages' or 'generations' of activity theory, or cultural-historical activity theory (CHAT). 'Generations' do not imply a 'better-worse' value judgment: each generation illustrates a different aspect. Whilst the first generation built on Vygotsky's notion of mediated action from the individual's perspective, the second generation built on Leontiev's notion of the activity system, with emphasis on the collective. The third generation, which appeared in the mid-nineties, builds on the idea of multiple interacting activity systems focused on a partially shared object, with boundary-crossings between them. An activity system is a collective in which one or more human actors engage in activity to cyclically transform an object (a raw material or problem) to repeatedly achieve a desired result.
First generation – Vygotsky
The first generation emerges from Vygotsky's theory of cultural mediation, which was a response to behaviorism's explanation of consciousness, or the development of the human mind, by reducing the human "mind" to atomic components or structures associated with "stimulus – response" (S-R) processes. Vygotsky argued that the relationship between a human subject and an object is never direct but must be sought in society and culture, because they evolve historically, rather than in the human brain or individual mind unto itself. Vygotsky saw the past and present as fused within the individual: the "present is seen in the light of history". His cultural-historical psychology attempted to account for the social origins of language and thinking.
To Vygotsky, consciousness emerges from human activity mediated by artifacts (tools) and signs. These artifacts – which can be physical tools such as hammers, ovens, or computers; cultural artifacts, including language; or theoretical artifacts, like algebra or feminist theory – are created and/or transformed in the course of activity, which, in the first-generation framework, happens at the individual level. Semiotic mediation is embodied in Vygotsky's triangular model, which features the subject (S), object (O), and mediating artifact. Vygotsky's triangular representation of mediated action attempts to explain the development of human consciousness in a manner that does not rely on dualistic stimulus–response (S-R) associations. In mediated action, the subject, object, and artifact stand in a dialectical relationship whereby each affects the other and the activity as a whole. Vygotsky argued that the use of signs leads to a specific structure of human behavior, which allows the creation of new forms of culturally based psychological processes – hence the importance of a cultural-historical context. Individuals can no longer be understood without their cultural environment, nor society without the agency of the individuals who use and produce these artifacts. Objects became cultural entities, and action oriented towards objects became key to understanding the human psyche.
In the Vygotskyan framework, the unit of analysis is the individual. First-generation activity theory has been used to understand individual behavior by examining the ways in which a person's objectivized actions are culturally mediated. Mediation is the key theoretical idea behind activity: people don't simply use tools and symbol systems; instead, everyday lived experiences are significantly mediated by the use of tools and symbol systems. Activity theory therefore helps frame our understanding of such mediation. There is a strong focus on material and symbolic mediation, as well as on the internalization of external (social, societal, and cultural) forms of mediation. In Vygotskyan psychology, internalization is a theoretical concept that explains how individuals process what they have learned through mediated action in the development of individual consciousness. Another important aspect of first-generation CHAT is the concept of the zone of proximal development (ZPD), or "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers". The ZPD is the theoretical range of what a performer can do with competent peers and assistance, as compared with what can be accomplished on one's own.
Second generation – Leontiev
While Vygotsky formulated practical human activity as the general explanatory category in human psychology, he did not fully clarify its nature. A. N. Leontiev developed the second generation of activity theory, which is a collective model. In Engeström's depiction of second-generation activity, the unit of analysis includes collective motivated activity toward an object, making room for understanding how collective action by social groups mediates activity. Leontiev theorized that activity resulted from the confluence of a human subject, the object of their activity as "the target or content of a thought or action", and the tools (including symbol systems) that mediate the object(ive). He saw activity as tripartite in structure, being composed of unconscious operations on/with tools, conscious but finite actions which are goal-directed, and higher-level activities which are object-oriented and driven by motives. Hence, second-generation activity theory included community, rules and division of labor, and the importance of analyzing their interactions with each other. Rules may be explicit or implicit. Division of labor refers to the explicit and implicit organization of the community involved in the activity.
Engeström described Vygotskian psychology as emphasizing the way semiotic and cultural systems mediate human action, whereas Leontiev's second-generation CHAT focused on the mediational effects of the systemic organization of human activity. In conceptualizing activity as only existing in relation to rules, community and division of labor, Engeström expanded the unit of analysis for studying human behavior from individual activity to a collective activity system. While the unit of analysis for Vygotsky is "individual activity", and for Leontiev the "collective activity system", for Jean Lave and others working on situated cognition the unit of analysis is "practice", "community of practice", and "participation". Other scholars analyze "the relationships between the individual's psychological development and the development of social systems". The activity system includes the social, psychological, cultural and institutional perspectives in the analysis. In this conceptualization, context or activity systems are inherently related to what Engeström argues are the deep-seated material practices and socioeconomic structures of a given culture. These societal dimensions had not been taken sufficiently into account by Vygotsky's earlier triadic model. In Leontiev's understanding, thought and cognition were understood as a part of social life – as a part of the means of production and systems of social relations on one hand, and the intentions of individuals in certain social conditions on the other. In the second-generation diagram, activity is positioned in the middle, mediation at the top, with rules, community and division of labor added at the bottom. The minimum components of an activity system are: the subject; the object; the outcome; mediating instruments/tools/artifacts; rules and signs; community; and division of labor.
In his example of the 'primeval collective hunt', Leontiev clarifies the difference between an individual action ("the beater frightening game") and a collective activity ("the hunt"). While individuals' actions (frightening game) are different from the overall goal of the activity (hunt), they share the same motive (obtaining food). Operations, on the other hand, are driven by the conditions and tools at hand, i.e. the objective circumstances under which the hunt is taking place. To understand the separate actions of the individuals, one needs to understand the broader motive behind the activity as a whole. This accounts for the three hierarchical levels of human functioning: object-related motives drive the collective activity (top); goals drive individual/group action(s) (middle); conditions and tools drive automated operations (lower level).
Third generation – Engeström et al.
After Vygotsky's foundational work on individuals' higher psychological functions and Leontiev's extension of these insights to collective activity systems, questions of diversity and dialogue between different traditions or perspectives became increasingly serious challenges. The work of Michael Cole and Yrjö Engeström in the 1970s and 1980s brought activity theory to a much wider audience of scholars in Scandinavia and North America. Once the lives and biographies of all the participants and the history of the wider community are taken into account, multiple activity systems need to be considered, positing, according to Engeström, the need for a "third generation" to "develop conceptual tools to understand dialogue, multiple perspectives, and networks of interacting activity systems". This larger canvas of active individuals (and researchers) embedded in organizational, political, and discursive practices constitutes a tangible advantage of second- and third-generation CHAT over its earlier Vygotskian ancestor, which focused on mediated action in relative isolation. Third-generation activity theory is the application of activity systems analysis (ASA) in developmental research, where investigators take a participatory and interventionist role in the participants' activities and change their experiences. Engeström's basic activity triangle (which adds rules/norms, intersubjective community relations, and division of labor, as well as multiple activity systems sharing an object) has become the principal third-generation model for analysing individuals and groups. Engeström summarizes the current state of CHAT with five principles:
The activity system as primary unit of analysis: the basic third-generation model includes minimally two interacting activity systems.
Multi-voicedness: an activity system is always a community of multiple points of views, traditions and interests.
Historicity: activity systems take shape and get transformed over long stretches of time. Potentials and problems can only be understood against the background of their own histories.
The central role of contradictions as sources of change and development.
Activity systems' possibility for expansive transformation (cycles of qualitative transformation): when object and motive are reconceptualized a radically wider horizon opens up.
Learning technologists have used third-generation CHAT as a guiding theoretical framework to understand how technologies are adopted, adapted, and configured through use in complex social situations. Engeström has acknowledged that the third-generation model was limited to analysing 'reasonably well-bounded' systems and that, in view of new, often web-based participatory practices, a fourth generation was needed.
Informing research and practice
Leontiev and social development
From the 1960s onwards, starting in the global South and independently from the mainstream European developmental line, Leontiev's core concept of objective activity has been used in a social development context. In the Organization Workshop's large-group capacitation method, objective/ized activity acts as the core causal principle, which postulates that, in order to change the mind-set of (large groups of) individuals, we need to start with changes to their activity – and/or to the object that "suggests" their activity. In a Leontievian vein, the Organization Workshop is about semiotically mediated activities through which (large groups of) participants learn how to manage themselves and the organizations they create to perform tasks that require a complex division of labor.
CHAT-inspired research and practice since the 1980s
Over the last two decades, CHAT has offered a theoretical lens informing research and practice, in that it posits that learning takes place through collective activities that are purposefully conducted around a common object. Starting from the premise that learning is a social and cultural process that draws on historical achievements, its systems thinking-based perspectives allow insights into the real world.
Change Laboratory (CL)
Change Laboratory (CL) is a CHAT-based method for formative intervention in activity systems and for research on their developmental potential, as well as on processes of expansive learning, collaborative concept-formation, and transformation of practices. It was elaborated in the mid-nineties by the Finnish Developmental Work Research (DWR) group, which became CRADLE in 2008. The CL method relies on collaboration between practitioners of the activity being analyzed and transformed, and academic researchers or interventionists supporting and facilitating collective developmental processes. Engeström developed a theory of expansive learning, which "begins with individual subjects questioning accepted practices, and it gradually expands into a collective movement or institution". The theory enables a "longitudinal and rich analysis of inter-organizational learning by using observational as well as interventionist designs in studies of work and organization". From this, the foundation of an interventionist research approach at DWR was elaborated in the 1980s, and developed further in the 1990s as an intervention method now known as Change Laboratory.
CL interventions are used both to study the conditions of change and to help those working in organizations to develop their work, drawing on participant observation, interviews, and the recording and videotaping of meetings and work practices. Initially, with the help of an external interventionist, a first stimulus that is beyond the actors' present capabilities is produced in the Change Laboratory by collecting first-hand empirical data on problematic aspects of the activity. This data may comprise difficult client cases and descriptions of recurrent disturbances and ruptures in the process of producing the outcome. The steps in the CL process are: Step 1, questioning; Step 2, analysis; Step 3, modeling; Step 4, examining; Step 5, implementing; Step 6, reflecting; Step 7, consolidating. These seven action steps for increased understanding are described by Engeström as expansive learning, or phases of an outwardly expanding spiral, while multiple kinds of actions can take place at any time. The phases of the model simply allow for the identification and analysis of the dominant action type during a particular period of time. These learning actions are provoked by contradictions – not simply conflicts or problems, but "historically accumulating structural tensions within and between activity systems". CL is used by a team or work unit, or by collaborating partners across organizational boundaries, initially with the help of an interventionist-researcher. The CL method has been used in agricultural contexts, educational and media settings, health care and learning support.
Activity systems analysis (ASA)
Activity systems analysis is a CHAT-based method that uses activity theory concepts such as mediated action, goal-directed activity and the dialectical relationship between the individual and the environment for understanding human activity in real-world situations, with data collection, analysis, and presentation methods that address the complexities of human activity in natural settings, aiming to advance both theory and practice. It is based on Vygotsky's concept of mediated action and captures human activity in a triangle model that includes the subject, tool, object, rule, community, and division of labor. Subjects are participants in an activity, motivated toward a purpose or attainment of the object. The object can be the goal of an activity, the subject's motives for participating in an activity, and the material products that subjects gain through an activity. Tools are socially shared cognitive and material resources that subjects can use to attain the object. Informal or formal rules regulate the subject's participation while engaging in an activity. The community is the group or organization to which subjects belong. The division of labor is the shared participation responsibilities in the activity determined by the community. Finally, the outcome is the consequences that the subject faces due to actions driven by the object. These outcomes can encourage or hinder the subject's participation in future activities.
In Part 2 of her video "Using Activity Theory to understand human behaviour", van der Riet shows how activity theory is applied to the problem of behavior change and HIV and AIDS in South Africa. The video focuses on sexual activity as the activity of the system and illustrates how an activity systems analysis, through a historical and current account of the activity, provides a way of understanding the lack of behavior change in response to HIV and AIDS. The book Activity Systems Analysis Methods describes seven ASA case studies which fall "into four distinct work clusters. These clusters include works that help (a) understand developmental work research (DWR), (b) describe real-world learning situations, (c) design human-computer interaction systems, and (d) plan solutions to complicated work-based problems". Other uses of ASA include summarizing organizational change; identifying guidelines for designing constructivist learning environments; identifying contradictions and tensions that shape developments in educational settings; demonstrating historical developments in organizational learning; and evaluating K–12 school and university partnership relations.
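The components enumerated above form a fixed schema, which is one reason ASA lends itself to structured coding of qualitative data. A minimal Python sketch of that schema as a record type follows; the field names track the paragraph above, while the hospital-handover instance is a hypothetical example of my own, not taken from the ASA literature.

```python
from dataclasses import dataclass

@dataclass
class ActivitySystem:
    """One Engeström-style activity triangle, the unit of analysis in ASA."""
    subject: str             # participant(s) motivated toward the object
    object: str              # goal, motive, or material product of the activity
    tools: list[str]         # mediating cognitive and material resources
    rules: list[str]         # formal or informal regulations on participation
    community: str           # group or organization the subjects belong to
    division_of_labor: str   # shared participation responsibilities
    outcome: str = ""        # consequences feeding back into future activity

# Hypothetical example: coding a hospital shift-handover practice.
handover = ActivitySystem(
    subject="night-shift nurse",
    object="accurate transfer of patient status",
    tools=["handover form", "electronic patient record"],
    rules=["handover completed before shift end"],
    community="ward team",
    division_of_labor="nurses report; charge nurse verifies",
    outcome="continuity of care",
)
print(handover.subject, "->", handover.object)
```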
Human–computer interaction (HCI)
When human–computer interaction (HCI) first appeared as a separate field of study in the early 1980s, it adopted the information-processing paradigm of computer science as the model for human cognition, predicated on prevalent cognitive psychology criteria which accounted neither for the interests, needs and frustrations of the individuals involved, nor for the fact that technology use depends on the social and dynamic contexts in which it takes place. Adopting a CHAT theoretical perspective carries implications for understanding how people use interactive technologies: for example, treating a computer as a mediating artefact of activity rather than as its object means that people interact with the world through computers, rather than with computers as 'objects'. Since the 1980s, a number of diverse methodologies outlining techniques for human–computer interaction design have emerged. Most design methodologies stem from a model of how users, designers, and technical systems interact.
Systemic-structural activity theory (SSAT)
SSAT builds on the general theory of activity to provide an effective basis for both experimental and analytic methods of studying human performance, using developed units of analysis. SSAT approaches cognition both as a process and as a structured system of actions or other functional information-processing units, developing a taxonomy of human activity through the use of structurally organized units of analysis. The systemic-structural approach to activity design and analysis involves identifying the available means of work, tools and objects; their relationship with possible strategies of work activity; existing constraints on activity performance; social norms and rules; possible stages of object transformation; and changes in the structure of activity during skill acquisition. This method is demonstrated by applying it to the study of a human–computer interaction task.
Future
Evolving field of study
CHAT offers a philosophical and cross-disciplinary perspective for analyzing human practices as development processes in which both individual and social levels are interlinked, as well as interactions and boundary-crossings between activity systems. Crossing boundaries involves "encountering difference, entering into unfamiliar territory, requiring cognitive retooling". More recently, the focus of studies of organizational learning has increasingly shifted away from learning within single organizations or organizational units, towards learning in multi‐organizational or inter‐organizational networks, as well as to the exploration of interactions in their social contexts, multiple contexts and cultures, and the dynamics and development of particular activities. This shift has generated such concepts as "networks of learning", "networked learning", coworking, and knotworking. Industry has seen growth in nonemployer firms (NEFs) due to changes in long-term employment trends and developments in mobile technology which have led to more work from remote locations, more distance collaboration, and more work organized around temporary projects. Developments such as these and new forms of social production or commons-based peer production like open source software development and cultural production in peer-to-peer (P2P) networks have become a key focus in Engeström's work. Social production processes are simultaneous, multi-directional and often reciprocal. The density and complexity of these processes blur distinctions between process and structure. The object of the activity is unstable, resists control and standardization, and requires rapid integration of expertise from various locations and traditions.
"Fourth generation"
The rapid rise of new forms of activities characterised by web-based social and participatory practices phenomena such as distributed workforce and the dominance of knowledge work, prompts a rethink of the third-generation model, bringing a need for a fourth generation activity system model. Fourth-Generation (4GAT) analysis should allow better examination of how activity networks interact, interpenetrate, and contradict each other. People "working alone together" may illuminate other examples of distributed, interorganizational, collaborative knowledge work. In fourth generation CHAT, the object(ive) will typically comprise multiple perspectives and contexts and be inherently transient; collaborations between actors are likely to be temporary, with multiple boundary crossings between interrelated activities. Fourth-generation activity theorists have specifically developed activity theory to better accommodate Castells's (and others') insights into how work organization has shifted in the network society. Hence, they will focus less on the workings of individual activity systems (often represented by triangles) and more on the interactions across activity systems functioning in networks.
See also
Activity theory
Aleksei N. Leontiev
Bonnie Nardi
Community of practice
Cultural-historical psychology
Kharkov School of Psychology
Knowledge sharing
Large-group capacitation
Legitimate peripheral participation
Lev Vygotsky
Organizational learning
Organization workshop
Social constructivism (learning theory)
Vygotsky Circle
Zone of proximal development
References
External links
Blunden, A. The Origins of CHAT.
Blunden, A. Concepts of CHAT Action, Behaviour and Consciousness (ppts).
Boardman, D. Activity Theory
Interview with Professor Yrjö Engeström: part 1
Interview with Professor Yrjö Engeström: part 2
Introduction to Cultural Historical Activity Theory (CHAT) Nygård
Leontiev works in English
Robertson, I. An Introduction to Activity Theory
Spinuzzi, Clay "All Edge: Understanding the New Workplace Networks" (Powerpoint Presentation)
The Future of Activity Theory
van der Riet Part I Introduction to Cultural Historical Activity Theory (CHAT)
van der Riet Part II Using Activity to Understand Human Behaviour
Vygotsky archive
Yamagata "Activity Systems Analysis in Design Research". (Powerpoint Presentation)
What is Activity Theory?
Adult education
Cognitive psychology
Learning methods
Social change
Training
Agent-based model
An agent-based model (ABM) is a computational model for simulating the actions and interactions of autonomous agents (individual or collective entities, such as organizations or groups) in order to understand the behavior of a system and what governs its outcomes. It combines elements of game theory, complex systems, emergence, computational sociology, multi-agent systems, and evolutionary programming. Monte Carlo methods are used to understand the stochasticity of these models. Particularly within ecology, ABMs are also called individual-based models (IBMs). A review of recent literature on individual-based models, agent-based models, and multiagent systems shows that ABMs are used in many scientific domains including biology, ecology and social science. Agent-based modeling is related to, but distinct from, the concept of multi-agent systems or multi-agent simulation in that the goal of ABM is to search for explanatory insight into the collective behavior of agents obeying simple rules, typically in natural systems, rather than in designing agents or solving specific practical or engineering problems.
Agent-based models are a kind of microscale model that simulate the simultaneous operations and interactions of multiple agents in an attempt to re-create and predict the appearance of complex phenomena. The process is one of emergence, which some express as "the whole is greater than the sum of its parts". In other words, higher-level system properties emerge from the interactions of lower-level subsystems. Or, macro-scale state changes emerge from micro-scale agent behaviors. Or, simple behaviors (meaning rules followed by agents) generate complex behaviors (meaning state changes at the whole system level).
Individual agents are typically characterized as boundedly rational, presumed to be acting in what they perceive as their own interests, such as reproduction, economic benefit, or social status, using heuristics or simple decision-making rules. ABM agents may experience "learning", adaptation, and reproduction.
Most agent-based models are composed of: (1) numerous agents specified at various scales (typically referred to as agent-granularity); (2) decision-making heuristics; (3) learning rules or adaptive processes; (4) an interaction topology; and (5) an environment. ABMs are typically implemented as computer simulations, either as custom software, or via ABM toolkits, and this software can be then used to test how changes in individual behaviors will affect the system's emerging overall behavior.
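For illustration, here is a minimal sketch in Python of a toy wealth-exchange model exhibiting most of these ingredients; it is not tied to any particular ABM toolkit, and all names and parameter values are invented for the example. The agents (1) share one level of granularity, the decision heuristic (2) is a single probabilistic giving rule, the interaction topology (4) is random pairing, and the environment (5) is implicit in the scheduler; a learning rule (3) could be added by letting generosity adapt to outcomes.

```python
import random

class Agent:
    """An agent with one unit of wealth and a simple decision heuristic."""
    def __init__(self):
        self.wealth = 1
        self.generosity = random.random()  # heterogeneous, fixed trait

    def step(self, partner):
        # Decision heuristic: hand over one unit with probability `generosity`.
        if self.wealth > 0 and random.random() < self.generosity:
            self.wealth -= 1
            partner.wealth += 1

def run(num_agents=100, steps=10_000):
    agents = [Agent() for _ in range(num_agents)]
    for _ in range(steps):
        # Interaction topology: random pairing in a fully mixed population.
        giver, receiver = random.sample(agents, 2)
        giver.step(receiver)
    return sorted(agent.wealth for agent in agents)

print(run()[-10:])  # the ten wealthiest agents after the run
```

Despite identical starting wealth, repeated local exchanges produce a skewed distribution at the system level, a small instance of the emergence described above.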
History
The idea of agent-based modeling was developed as a relatively simple concept in the late 1940s. Since it requires computation-intensive procedures, it did not become widespread until the 1990s.
Early developments
The history of the agent-based model can be traced back to the Von Neumann machine, a theoretical machine capable of reproduction. The device von Neumann proposed would follow precisely detailed instructions to fashion a copy of itself. The concept was then built upon by von Neumann's friend Stanislaw Ulam, also a mathematician; Ulam suggested that the machine be built on paper, as a collection of cells on a grid. The idea intrigued von Neumann, who drew it up—creating the first of the devices later termed cellular automata.
Another advance was introduced by the mathematician John Conway. He constructed the well-known Game of Life. Unlike von Neumann's machine, Conway's Game of Life operated by simple rules in a virtual world in the form of a 2-dimensional checkerboard.
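The rules of Conway's world fit in a few lines of code: a live cell survives with two or three live neighbours, a dead cell comes alive with exactly three, and everything else dies or stays empty. A sketch in Python, assuming a sparse set-of-live-cells representation (the glider pattern shown is standard):

```python
from collections import Counter

def life_step(live_cells):
    """One generation of Conway's Game of Life on an unbounded grid.

    `live_cells` is a set of (x, y) coordinates of live cells.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in neighbour_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = life_step(glider)  # after 4 steps the glider has moved one cell diagonally
```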
The Simula programming language, developed in the mid-1960s and widely implemented by the early 1970s, was the first framework for automating step-by-step agent simulations.
1970s and 1980s: the first models
One of the earliest agent-based models in concept was Thomas Schelling's segregation model, which was discussed in his paper "Dynamic Models of Segregation" in 1971. Though Schelling originally used coins and graph paper rather than computers, his models embodied the basic concept of agent-based models as autonomous agents interacting in a shared environment with an observed aggregate, emergent outcome.
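Schelling's finding, that strong large-scale segregation can emerge from mild individual preferences, is easy to reproduce computationally. Below is a minimal sketch in Python; the grid wraps around at the edges, and every parameter name and value is illustrative rather than taken from Schelling's paper.

```python
import random

def schelling(width=20, height=20, empty_frac=0.1, threshold=0.3, moves=10_000):
    """Toy Schelling segregation model on a wrapping grid.

    Cells hold agent type 1 or 2, or None if empty. An agent is unhappy, and
    relocates to a random empty cell, if fewer than `threshold` of its
    occupied neighbours share its type.
    """
    cells = {}
    for x in range(width):
        for y in range(height):
            r = random.random()
            cells[(x, y)] = None if r < empty_frac else (1 if r < (1 + empty_frac) / 2 else 2)

    def unhappy(pos):
        x, y = pos
        neighbours = [cells[((x + dx) % width, (y + dy) % height)]
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        occupied = [t for t in neighbours if t is not None]
        return bool(occupied) and sum(t == cells[pos] for t in occupied) / len(occupied) < threshold

    for _ in range(moves):
        pos = random.choice(list(cells))
        if cells[pos] is not None and unhappy(pos):
            empties = [p for p, t in cells.items() if t is None]
            if empties:
                cells[random.choice(empties)], cells[pos] = cells[pos], None
    return cells
```

Even with agents content in a one-third-similar neighbourhood (threshold=0.3), the final grid typically shows large single-type clusters, the aggregate outcome Schelling observed with coins and graph paper.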
In the late 1970s, Paulien Hogeweg and Bruce Hesper began experimenting with individual-based models of ecology. One of their first results was to show that the social structure of bumble-bee colonies emerged as a result of simple rules that govern the behaviour of individual bees.
They introduced the ToDo principle, referring to the way agents "do what there is to do" at any given time.
In the early 1980s, Robert Axelrod hosted a tournament of Prisoner's Dilemma strategies and had them interact in an agent-based manner to determine a winner. Axelrod would go on to develop many other agent-based models in the field of political science that examine phenomena from ethnocentrism to the dissemination of culture.
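The mechanics of such a round-robin tournament fit in a short script. A sketch in Python with two strategies in the style of the submitted entries (the payoff matrix is the standard one and the 200-round length follows common descriptions of the tournament, but the code is illustrative, not Axelrod's):

```python
import itertools

# Payoffs for (my_move, their_move); C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def always_defect(opponent_history):
    return 'D'

def tit_for_tat(opponent_history):
    # Rapoport's winning entry: cooperate first, then copy the opponent's last move.
    return opponent_history[-1] if opponent_history else 'C'

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

strategies = {'always_defect': always_defect, 'tit_for_tat': tit_for_tat}
totals = dict.fromkeys(strategies, 0)
for (name_a, sa), (name_b, sb) in itertools.combinations(strategies.items(), 2):
    a, b = play(sa, sb)
    totals[name_a] += a
    totals[name_b] += b
print(totals)
```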
By the late 1980s, Craig Reynolds' work on flocking models contributed to the development of some of the first biological agent-based models that contained social characteristics. He tried to model the reality of lively biological agents, known as artificial life, a term coined by Christopher Langton.
The first use of the word "agent" and a definition as it is currently used today is hard to track down. One candidate appears to be John Holland and John H. Miller's 1991 paper "Artificial Adaptive Agents in Economic Theory", based on an earlier conference presentation of theirs. A stronger and earlier candidate is Allen Newell, who in the first Presidential Address of AAAI (published as The Knowledge Level) discussed intelligent agents as a concept.
At the same time, during the 1980s, social scientists, mathematicians, operations researchers, and a scattering of people from other disciplines developed Computational and Mathematical Organization Theory (CMOT). This field grew as a special interest group of The Institute of Management Sciences (TIMS) and its sister society, the Operations Research Society of America (ORSA).
1990s: expansion
The 1990s were especially notable for the expansion of ABM within the social sciences. One notable effort was the large-scale ABM Sugarscape, developed by Joshua M. Epstein and Robert Axtell to simulate and explore the role of social phenomena such as seasonal migrations, pollution, sexual reproduction, combat, and transmission of disease and even culture. Other notable 1990s developments included an ABM by Carnegie Mellon University's Kathleen Carley to explore the co-evolution of social networks and culture. The Santa Fe Institute (SFI) was important in encouraging the development of the ABM modeling platform Swarm under the leadership of Christopher Langton. Research conducted through SFI allowed the expansion of ABM techniques to a number of fields, including study of the social and spatial dynamics of small-scale human societies and primates. During this timeframe Nigel Gilbert published the first textbook on social simulation, Simulation for the Social Scientist (1999), and established a journal from the perspective of social sciences: the Journal of Artificial Societies and Social Simulation (JASSS). Other than JASSS, agent-based models of any discipline are within scope of the SpringerOpen journal Complex Adaptive Systems Modeling (CASM).
Through the mid-1990s, the social sciences thread of ABM began to focus on such issues as designing effective teams, understanding the communication required for organizational effectiveness, and the behavior of social networks. CMOT—later renamed Computational Analysis of Social and Organizational Systems (CASOS)—incorporated more and more agent-based modeling. Samuelson (2000) is a good brief overview of the early history, and Samuelson (2005) and Samuelson and Macal (2006) trace the more recent developments.
In the late 1990s, the merger of TIMS and ORSA to form INFORMS, and the move by INFORMS from two meetings each year to one, helped to spur the CMOT group to form a separate society, the North American Association for Computational Social and Organizational Sciences (NAACSOS). Kathleen Carley was a major contributor, especially to models of social networks, obtaining National Science Foundation funding for the annual conference and serving as the first President of NAACSOS. She was succeeded by David Sallach of the University of Chicago and Argonne National Laboratory, and then by Michael Prietula of Emory University. At about the same time NAACSOS began, the European Social Simulation Association (ESSA) and the Pacific Asian Association for Agent-Based Approach in Social Systems Science (PAAA), counterparts of NAACSOS, were organized. As of 2013, these three organizations collaborate internationally. The First World Congress on Social Simulation was held under their joint sponsorship in Kyoto, Japan, in August 2006. The Second World Congress was held in the northern Virginia suburbs of Washington, D.C., in July 2008, with George Mason University taking the lead role in local arrangements.
2000s and later
More recently, Ron Sun developed methods for basing agent-based simulation on models of human cognition, known as cognitive social simulation. Bill McKelvey, Suzanne Lohmann, Dario Nardi, Dwight Read and others at UCLA have also made significant contributions in organizational behavior and decision-making. Since 1991, UCLA has arranged a conference at Lake Arrowhead, California, that has become another major gathering point for practitioners in this field.
Theory
Most computational modeling research describes systems in equilibrium or as moving between equilibria. Agent-based modeling, however, using simple rules, can result in different sorts of complex and interesting behavior. The three ideas central to agent-based models are agents as objects, emergence, and complexity.
Agent-based models consist of dynamically interacting rule-based agents. The systems within which they interact can create real-world-like complexity. Typically agents are situated in space and time and reside in networks or in lattice-like neighborhoods. The location of the agents and their responsive behavior are encoded in algorithmic form in computer programs. In some cases, though not always, the agents may be considered as intelligent and purposeful. In ecological ABM (often referred to as "individual-based models" in ecology), agents may, for example, be trees in a forest, and would not be considered intelligent, although they may be "purposeful" in the sense of optimizing access to a resource (such as water).
The modeling process is best described as inductive. The modeler makes those assumptions thought most relevant to the situation at hand and then watches phenomena emerge from the agents' interactions. Sometimes that result is an equilibrium. Sometimes it is an emergent pattern. Sometimes, however, it is an unintelligible mangle.
In some ways, agent-based models complement traditional analytic methods. Where analytic methods enable humans to characterize the equilibria of a system, agent-based models allow the possibility of generating those equilibria. This generative contribution may be the most mainstream of the potential benefits of agent-based modeling. Agent-based models can explain the emergence of higher-order patterns—network structures of terrorist organizations and the Internet, power-law distributions in the sizes of traffic jams, wars, and stock-market crashes, and social segregation that persists despite populations of tolerant people. Agent-based models also can be used to identify lever points, defined as moments in time in which interventions have extreme consequences, and to distinguish among types of path dependency.
Rather than focusing on stable states, many models consider a system's robustness—the ways that complex systems adapt to internal and external pressures so as to maintain their functionalities. The task of harnessing that complexity requires consideration of the agents themselves—their diversity, connectedness, and level of interactions.
Framework
Recent work on the modeling and simulation of complex adaptive systems has demonstrated the need for combining agent-based and complex network-based models. One proposed framework consists of four levels for developing models of complex adaptive systems, described using several example multidisciplinary case studies:
Complex Network Modeling Level for developing models using interaction data of various system components.
Exploratory Agent-based Modeling Level for developing agent-based models for assessing the feasibility of further research. This can be useful, for example, for developing proof-of-concept models such as for funding applications, without requiring an extensive learning curve for the researchers.
Descriptive Agent-based Modeling (DREAM) for developing descriptions of agent-based models by means of using templates and complex network-based models. Building DREAM models allows model comparison across scientific disciplines.
Validated agent-based modeling using Virtual Overlay Multiagent system (VOMAS) for the development of verified and validated models in a formal manner.
Other methods of describing agent-based models include code templates and text-based methods such as the ODD (Overview, Design concepts, and Design Details) protocol.
The role of the environment where agents live, both macro and micro, is also becoming an important factor in agent-based modelling and simulation work. A simple environment affords simple agents, but complex environments generate diversity of behavior.
Multi-scale modelling
One strength of agent-based modelling is its ability to mediate information flow between scales. When additional details about an agent are needed, the agent-based model can be integrated with models describing those details. When the interest lies in the emergent behaviours demonstrated by the agent population, the agent-based model can be combined with a continuum model describing population dynamics. For example, in a study about CD4+ T cells (a key cell type in the adaptive immune system), the researchers modelled biological phenomena occurring at different spatial (intracellular, cellular, and systemic), temporal, and organizational scales (signal transduction, gene regulation, metabolism, cellular behaviors, and cytokine transport). In the resulting modular model, signal transduction and gene regulation are described by a logical model, metabolism by constraint-based models, cell population dynamics by an agent-based model, and systemic cytokine concentrations by ordinary differential equations. In this multi-scale model, the agent-based model occupies the central place and orchestrates every stream of information flow between scales.
Applications
In biology
Agent-based modeling has been used extensively in biology, including the analysis of the spread of epidemics and the threat of biowarfare; biological applications including population dynamics, stochastic gene expression, plant-animal interactions, vegetation ecology, migratory ecology, landscape diversity, sociobiology, the growth and decline of ancient civilizations, evolution of ethnocentric behavior, forced displacement/migration, language choice dynamics, and cognitive modeling; and biomedical applications including modeling 3D breast tissue formation/morphogenesis, the effects of ionizing radiation on mammary stem cell subpopulation dynamics, inflammation, the human immune system, and the evolution of foraging behaviors. Agent-based models have also been used for developing decision support systems, such as for breast cancer. Agent-based models are increasingly being used to model pharmacological systems in early-stage and pre-clinical research to aid in drug development and gain insights into biological systems that would not be possible a priori. Military applications have also been evaluated. Moreover, agent-based models have recently been employed to study molecular-level biological systems. Agent-based models have also been written to describe ecological processes at work in ancient systems, such as those in dinosaur environments, and in more recent ancient systems as well.
In epidemiology
Agent-based models now complement traditional compartmental models, the usual type of epidemiological models. ABMs have been shown to be superior to compartmental models with regard to the accuracy of predictions. Recently, ABMs such as CovidSim, by epidemiologist Neil Ferguson, have been used to inform public health (nonpharmaceutical) interventions against the spread of SARS-CoV-2. Epidemiological ABMs have been criticized for simplifying and unrealistic assumptions. Still, they can be useful in informing decisions regarding mitigation and suppression measures when they are accurately calibrated. The ABMs for such simulations are mostly based on synthetic populations, since data on the actual population are not always available.
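Where a compartmental model tracks only aggregate counts, an ABM tracks who is infected. The following is a stylized sketch of an agent-based SIR model in Python; the population here is fully mixed and the parameters are arbitrary, not calibrated to any real pathogen, so it only illustrates the mechanics (heterogeneous contact networks are what usually distinguish ABMs in practice).

```python
import random

def abm_sir(n=1000, contacts=8, p_infect=0.05, p_recover=0.1, seed_cases=5, days=120):
    """Stylized agent-based SIR model on a fully mixed population.

    Each day every infectious agent meets `contacts` random others, infecting
    each susceptible one with probability `p_infect`, and recovers with
    probability `p_recover`. Returns daily (S, I, R) counts.
    """
    state = ['S'] * n
    for i in random.sample(range(n), seed_cases):
        state[i] = 'I'
    history = []
    for _ in range(days):
        for i in [k for k, s in enumerate(state) if s == 'I']:
            for j in random.choices(range(n), k=contacts):
                if state[j] == 'S' and random.random() < p_infect:
                    state[j] = 'I'
            if random.random() < p_recover:
                state[i] = 'R'
        history.append((state.count('S'), state.count('I'), state.count('R')))
    return history

print(abm_sir()[-1])  # final (S, I, R) counts for one stochastic run
```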
In business, technology and network theory
Agent-based models have been used since the mid-1990s to solve a variety of business and technology problems. Examples of applications include marketing, organizational behaviour and cognition, team working, supply chain optimization and logistics, modeling of consumer behavior, including word of mouth, social network effects, distributed computing, workforce management, and portfolio management. They have also been used to analyze traffic congestion.
Recently, agent-based modelling and simulation has been applied to various domains, such as studying the impact of publication venues by researchers in the computer science domain (journals versus conferences). In addition, ABMs have been used to simulate information delivery in ambient assisted environments. A November 2016 article posted on arXiv analyzed an agent-based simulation of the spread of posts on Facebook. In the domain of peer-to-peer, ad hoc and other self-organizing and complex networks, the usefulness of agent-based modeling and simulation has been shown. The use of a computer science-based formal specification framework coupled with wireless sensor networks and an agent-based simulation has recently been demonstrated.
Agent-based evolutionary search is a newer research topic for solving complex optimization problems.
In team science
In the realm of team science, agent-based modeling has been utilized to assess the effects of team members' characteristics and biases on team performance across various settings. By simulating interactions between agents—each representing individual team members with distinct traits and biases—this modeling approach enables researchers to explore how these factors collectively influence the dynamics and outcomes of team performance. Consequently, agent-based modeling provides a nuanced understanding of team science, facilitating a deeper exploration of the subtleties and variabilities inherent in team-based collaborations.
In economics and social sciences
Prior to, and in the wake of, the 2008 financial crisis, interest has grown in ABMs as possible tools for economic analysis. ABMs do not assume the economy can achieve equilibrium, and "representative agents" are replaced by agents with diverse, dynamic, and interdependent behavior, including herding. ABMs take a "bottom-up" approach and can generate extremely complex and volatile simulated economies. ABMs can represent unstable systems with crashes and booms that develop out of non-linear (disproportionate) responses to proportionally small changes. A July 2010 article in The Economist looked at ABMs as alternatives to DSGE models. The journal Nature also encouraged agent-based modeling with an editorial that suggested ABMs can do a better job of representing financial markets and other economic complexities than standard models, along with an essay by J. Doyne Farmer and Duncan Foley that argued ABMs could fulfill both the desires of Keynes to represent a complex economy and of Robert Lucas to construct models based on microfoundations. Farmer and Foley pointed to progress that has been made using ABMs to model parts of an economy, but argued for the creation of a very large model that incorporates low-level models. By modeling a complex system of analysts based on three distinct behavioral profiles – imitating, anti-imitating, and indifferent – financial markets were simulated to high accuracy. Results showed a correlation between network morphology and the stock market index. However, the ABM approach has been criticized for its lack of robustness between models, where similar models can yield very different results.
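The flavour of such simulations can be conveyed with a toy model. The sketch below implements the three behavioral profiles named above in Python; the price-impact rule, the parameters, and the network-free random pairing are all invented for the illustration and are far simpler than the cited study.

```python
import random

def herding_market(n=200, steps=2000, p_act=0.7):
    """Toy market of analysts with imitating, anti-imitating and indifferent profiles.

    Positions are +1 (buy) or -1 (sell); each step one analyst observes a
    random peer and reacts according to its profile, and the 'price' drifts
    with excess demand. Purely illustrative dynamics.
    """
    position = [random.choice((-1, 1)) for _ in range(n)]
    profile = [random.choice(('imitate', 'anti', 'indifferent')) for _ in range(n)]
    prices = [100.0]
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        if profile[i] == 'imitate' and random.random() < p_act:
            position[i] = position[j]        # herd with the observed peer
        elif profile[i] == 'anti' and random.random() < p_act:
            position[i] = -position[j]       # contrarian reaction
        prices.append(prices[-1] * (1 + 0.0005 * sum(position) / n))
    return prices

print(herding_market()[-1])  # final price after one stochastic run
```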
ABMs have been deployed in architecture and urban planning to evaluate design and to simulate pedestrian flow in the urban environment, and in the examination of public policy applications to land use. There is also a growing field of socio-economic analysis of infrastructure investment impact using ABMs' ability to discern systemic impacts upon a socio-economic network. Heterogeneity and dynamics can easily be built into ABMs to address wealth inequality and social mobility.
ABMs have also been proposed as applied educational tools for diplomats in the field of international relations and for domestic and international policymakers to enhance their evaluation of public policy.
ABM is also becoming increasingly popular in the field of energy systems analysis, particularly in the context of electricity market modelling. Notable examples of such models include AMIRIS, ASSUME, EMLab, and PowerACE, which facilitate the analysis of electricity markets in the context of the ongoing renewable energy transition.
In water management
ABMs have also been applied in water resources planning and management, particularly for exploring, simulating, and predicting the performance of infrastructure design and policy decisions, and in assessing the value of cooperation and information exchange in large water resources systems.
Organizational ABM: agent-directed simulation
The agent-directed simulation (ADS) metaphor distinguishes between two categories, namely "Systems for Agents" and "Agents for Systems." Systems for Agents (sometimes referred to as agent systems) are systems implementing agents for use in engineering, human and social dynamics, military applications, and others. Agents for Systems are divided into two subcategories. Agent-supported systems deal with the use of agents as a support facility to enable computer assistance in problem solving or enhancing cognitive capabilities. Agent-based systems focus on the use of agents for the generation of model behavior in a system evaluation (system studies and analyses).
Self-driving cars
Hallerbach et al. discussed the application of agent-based approaches for the development and validation of automated driving systems via a digital twin of the vehicle-under-test and microscopic traffic simulation based on independent agents. Waymo has created a multi-agent simulation environment Carcraft to test algorithms for self-driving cars. It simulates traffic interactions between human drivers, pedestrians and automated vehicles. People's behavior is imitated by artificial agents based on data of real human behavior. The basic idea of using agent-based modeling to understand self-driving cars was discussed as early as 2003.
Implementation
Many ABM frameworks are designed for serial von Neumann computer architectures, limiting the speed and scalability of implemented models. Since emergent behavior in large-scale ABMs is dependent on population size, scalability restrictions may hinder model validation. Such limitations have mainly been addressed using distributed computing, with frameworks such as Repast HPC specifically dedicated to this type of implementation. While such approaches map well to cluster and supercomputer architectures, issues related to communication and synchronization, as well as deployment complexity, remain potential obstacles for their widespread adoption.
A recent development is the use of data-parallel algorithms on graphics processing units (GPUs) for ABM simulation. The extreme memory bandwidth combined with the sheer number-crunching power of multi-processor GPUs has enabled simulation of millions of agents at tens of frames per second.
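The pattern such frameworks exploit is a synchronous, uniform update over all agents at once. A small sketch of that pattern using NumPy array operations (vectorized on the CPU here; an actual GPU port would use, for example, CuPy or a custom kernel, and the majority-vote rule is invented for illustration):

```python
import numpy as np

def step(opinions):
    """Synchronously update a lattice of binary 'opinion' agents.

    Every agent adopts the majority opinion of its four lattice neighbours,
    keeping its own opinion on a tie; all agents update in one array pass.
    """
    neighbours = (np.roll(opinions, 1, 0) + np.roll(opinions, -1, 0) +
                  np.roll(opinions, 1, 1) + np.roll(opinions, -1, 1))
    majority = neighbours > 2            # three or four neighbours say 1
    tie = neighbours == 2                # split vote: keep current opinion
    return np.where(majority | (tie & (opinions == 1)), 1, 0).astype(np.int8)

grid = np.random.randint(0, 2, (1024, 1024), dtype=np.int8)
for _ in range(100):
    grid = step(grid)  # about a million agents per pass, no Python-level loop over agents
```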
Integration with other modeling forms
Since Agent-Based Modeling is more of a modeling framework than a particular piece of software or platform, it has often been used in conjunction with other modeling forms. For instance, agent-based models have also been combined with Geographic Information Systems (GIS). This provides a useful combination where the ABM serves as a process model and the GIS system can provide a model of pattern. Similarly, Social Network Analysis (SNA) tools and agent-based models are sometimes integrated, where the ABM is used to simulate the dynamics on the network while the SNA tool models and analyzes the network of interactions. Tools like GAMA provide a natural way to integrate system dynamics and GIS with ABM.
Verification and validation
Verification and validation (V&V) of simulation models is extremely important. Verification involves making sure the implemented model matches the conceptual model, whereas validation ensures that the implemented model has some relationship to the real world. Face validation, sensitivity analysis, calibration, and statistical validation are different aspects of validation. A discrete-event simulation framework approach for the validation of agent-based systems has been proposed, and comprehensive treatments of the empirical validation of agent-based models exist in the literature.
As an example of a V&V technique, consider VOMAS (virtual overlay multi-agent system), a software engineering based approach in which a virtual overlay multi-agent system is developed alongside the agent-based model. Muazi et al. also provide an example of using VOMAS for verification and validation of a forest fire simulation model. Another software engineering method, test-driven development, has been adapted for agent-based model validation. This approach has the added advantage of allowing automatic validation using unit-testing tools.
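In that spirit, a test-driven validation sketch for the SIR model sketched in the epidemiology section might look as follows (the module name epidemic_sketch is hypothetical; each test encodes a property the conceptual model demands of any correct implementation):

```python
import unittest
from epidemic_sketch import abm_sir  # hypothetical module holding the SIR sketch above

class TestAbmSir(unittest.TestCase):
    def test_population_is_conserved(self):
        # Agents change state but are never created or destroyed.
        for s, i, r in abm_sir(n=500, days=30):
            self.assertEqual(s + i + r, 500)

    def test_no_outbreak_without_seed_cases(self):
        # With no initial infections, nobody should ever become infected.
        history = abm_sir(n=500, seed_cases=0, days=30)
        self.assertTrue(all(i == 0 and r == 0 for _, i, r in history))

if __name__ == "__main__":
    unittest.main()
```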
See also
Agent-based computational economics
Agent-based model in biology
Agent-based social simulation (ABSS)
Artificial society
Boids
Comparison of agent-based modeling software
Complex system
Complex adaptive system
Computational sociology
Conway's Game of Life
Dynamic network analysis
Emergence
Evolutionary algorithm
Flocking
Internet bot
Kinetic exchange models of markets
Multi-agent system
Simulated reality
Social complexity
Social simulation
Sociophysics
Software agent
Swarming behaviour
Web-based simulation
TOTREP
References
External links
Articles/general information
Agent-based models of social networks, java applets.
On-Line Guide for Newcomers to Agent-Based Modeling in the Social Sciences
Introduction to Agent-based Modeling and Simulation. Argonne National Laboratory, November 29, 2006.
Agent-based models in Ecology – Using computer models as theoretical tools to analyze complex ecological systems
Network for Computational Modeling in the Social and Ecological Sciences' Agent Based Modeling FAQ
Multiagent Information Systems – Article on the convergence of SOA, BPM and Multi-Agent Technology in the domain of the Enterprise Information Systems. Jose Manuel Gomez Alvarez, Artificial Intelligence, Technical University of Madrid – 2006
Artificial Life Framework
Article providing methodology for moving real world human behaviors into a simulation model where agent behaviors are represented
Agent-based Modeling Resources, an information hub for modelers, methods, and philosophy for agent-based modeling
An Agent-Based Model of the Flash Crash of May 6, 2010, with Policy Implications, Tommi A. Vuorenmaa (Valo Research and Trading), Liang Wang (University of Helsinki - Department of Computer Science), October, 2013
Simulation models
Multi-agent Meeting Scheduling System Model by Qasim Siddique
Multi-firm market simulation by Valentino Piana
List of COVID-19 simulation models
Models of computation
Complex systems theory
Methods in sociology
Artificial life
Discontinuity (Postmodernism)
Discontinuity and continuity, according to Michel Foucault, reflect the flow of history and the fact that some "things are no longer perceived, described, expressed, characterised, classified, and known in the same way" from one era to the next (1994).
Explanation
In developing the theory of archaeology of knowledge, Foucault was trying to analyse the fundamental codes which a culture uses to construct the episteme or configuration of knowledge that determines the empirical orders and social practices of each particular historical era. He adopted discontinuity as a positive working tool. Some of the discourse would be regular and continuous over time as knowledge steadily accumulates and society gradually establishes what will constitute truth or reason for the time being. But, in a transition from one era to the next, there will be overlaps, breaks and discontinuities as society reconfigures the discourse to match the new environment.
The tool is given an expanded role in genealogy, the next phase of discourse analysis, where the intention is to grasp the total complexity of the use of power and the effects it produces. Foucault sees power as the means for constituting individuals' identities and determining the limits of their autonomy. This reflects the symbiotic relationship between power (pouvoir) and knowledge (savoir). In his study of prisons and hospitals, he observed how the modern individual becomes both an object and subject of knowledge. Science emerges as a means of directing and shaping lives. Hence, the modern conception of sexuality emerges from Christian codes of morality, the science of psychology, the laws and enforcement strategies adopted by the police and judiciary, the way in which issues of sexuality are discussed in the public media, the education system, etc. These are covert forms of domination (if not oppression), and their influence is to be found not only in what is said, but more importantly, in what is not said: in all the silences and lacunae, in all the discontinuities. If an idea is discussed at one time but no longer discussed in the next, whose interest is served by this change?
References
Foucault, M. The Order of Things: An Archaeology of the Human Sciences. Vintage; Reissue edition (1994)
Postmodern theory
Post-structuralism
Social philosophy
Structuralism
Michel Foucault
Educational system
The educational system generally refers to the structure of all institutions and the opportunities for obtaining education within a country. It includes all pre-school institutions, starting from family education and/or early childhood education, through kindergarten, primary, and secondary schools, to tertiary institutions such as lyceums, colleges, and faculties, also known as higher education (university education). This framework also includes institutions of continuous (further) professional and personal education, as well as private educational institutions.
While the education system is usually regulated and organized according to the relevant laws of a country, a country's education system may have unregulated aspects or dimensions. Typically, an education system is designed to provide education for all sections of a country's society and its members. It comprises everything that goes into educating the population.
The United Nations Educational, Scientific and Cultural Organization (UNESCO) recognises nine levels of education in its International Standard Classification of Education (ISCED) system (from Level 0 (pre-primary education) through Level 8 (doctoral)). UNESCO's International Bureau of Education maintains a database of country-specific education systems and their stages.
See also
Business School
System
Education
Educational stage
International Standard Classification of Education
Further reading
del Río, Adrián; Knutsen, Carl Henrik; Lutscher, Philipp M. (2024). "Education Policies and Systems Across Modern History: A Global Dataset". Comparative Political Studies.
References
Education systems
Social systems | 0.76633 | 0.992598 | 0.760658 |
Thesis
A thesis (plural: theses), or dissertation (abbreviated diss.), is a document submitted in support of candidature for an academic degree or professional qualification presenting the author's research and findings. In some contexts, the word thesis or a cognate is used for part of a bachelor's or master's course, while dissertation is normally applied to a doctorate. This is the typical arrangement in American English. In other contexts, such as within most institutions of the United Kingdom and Republic of Ireland, the reverse is true. The term graduate thesis is sometimes used to refer to both master's theses and doctoral dissertations.
The required complexity or quality of research of a thesis or dissertation can vary by country, university, or program, and the required minimum study period may thus vary significantly in duration.
The word dissertation can at times be used to describe a treatise without relation to obtaining an academic degree. The term thesis is also used to refer to the general claim of an essay or similar work.
Etymology
The term thesis comes from the Greek word θέσις (thésis), meaning "something put forth", and refers to an intellectual proposition. Dissertation comes from the Latin dissertātiō, meaning "discussion". Aristotle was the first philosopher to define the term thesis: "A 'thesis' is a supposition of some eminent philosopher that conflicts with the general opinion...for to take notice when any ordinary person expresses views contrary to men's usual opinions would be silly." For Aristotle, a thesis would therefore be a supposition that is stated in contradiction with general opinion or expresses disagreement with other philosophers (104b33-35). A supposition is a statement or opinion that may or may not be true depending on the evidence and/or proof that is offered (152b32). The purpose of the dissertation is thus to outline the proofs of why the author disagrees with other philosophers or the general opinion.
Structure and presentation style
Structure
A thesis (or dissertation) may be arranged as a thesis by publication or a monograph, with or without appended papers, respectively, though many graduate programs allow candidates to submit a curated collection of articles. An ordinary monograph has a title page, an abstract, a table of contents, the various chapters (introduction, literature review, methodology, results, and discussion), and a bibliography or, more usually, a references section. Theses differ in their structure in accordance with the many different areas of study (arts, humanities, social sciences, technology, sciences, etc.) and the differences between them. In a thesis by publication, the chapters constitute an introductory and comprehensive review of the appended published and unpublished article documents.
Dissertations normally report on a research project or study, or an extended analysis of a topic. The structure of a thesis or dissertation explains the purpose, the previous research literature impinging on the topic of the study, the methods used, and the findings of the project. Most world universities use a multiple chapter format:
a) an introduction: which introduces the research topic, the methodology, as well as its scope and significance
b) a literature review: reviewing relevant literature and showing how this has informed the research issue
c) a methodology chapter, explaining how the research has been designed and why the research methods/population/data collection and analysis being used have been chosen
d) a findings chapter: outlining the findings of the research itself
e) an analysis and discussion chapter: analysing the findings and discussing them in the context of the literature review (this chapter is often divided into two—analysis and discussion)
f) a conclusion: which states the judgement or decision reached by the thesis
Style
Degree-awarding institutions often define their own house style that candidates have to follow when preparing a thesis document. In addition to institution-specific house styles, there exist a number of field-specific, national, and international standards and recommendations for the presentation of theses, for instance ISO 7144. Other applicable international standards include ISO 2145 on section numbers, ISO 690 on bibliographic references, and ISO 31 or its revision ISO 80000 on quantities or units.
Some older house styles specify that front matter (title page, abstract, table of content, etc.) must use a separate page number sequence from the main text, using Roman numerals. The relevant international standard and many newer style guides recognize that this book design practice can cause confusion where electronic document viewers number all pages of a document continuously from the first page, independent of any printed page numbers. They, therefore, avoid the traditional separate number sequence for front matter and require a single sequence of Arabic numerals starting with 1 for the first printed page (the recto of the title page).
Presentation requirements, including pagination, layout, type and color of paper, use of acid-free paper (where a copy of the dissertation will become a permanent part of the library collection), paper size, order of components, and citation style, will be checked page by page by the accepting officer before the thesis is accepted and a receipt is issued.
However, strict standards are not always required. Most Italian universities, for example, have only general requirements on the character size and the page formatting, and leave much freedom for the actual typographic details.
Thesis committee
The thesis committee (or dissertation committee) is a committee that supervises a student's dissertation. In the US, these committees usually consist of a primary supervisor or advisor and two or more committee members, who supervise the progress of the dissertation and may also act as the examining committee, or jury, at the oral examination of the thesis.
At most universities, the committee is chosen by the student in conjunction with their primary adviser, usually after completion of the comprehensive examinations or prospectus meeting, and may consist of members of the comps committee. The committee members are doctors in their field (whether a PhD or other designation) and have the task of reading the dissertation, making suggestions for changes and improvements, and sitting in on the defense. Sometimes, at least one member of the committee must be a professor in a department that is different from that of the student.
Role of thesis supervisor
The role of the thesis supervisor is to assist and support a student in their studies, and to determine whether a thesis is ready for examination. The thesis is authored by the student, not the supervisor. The duties of the thesis supervisor also include checking for copyright compliance and ensuring that the student has included in/with the thesis a statement attesting that he/she is the sole author of the thesis.
Regional and degree-specific practices and terminologies
Argentina
In the Latin American docta, the academic dissertation can be referred to at different stages within the academic program that the student is pursuing at a recognized Argentine university; in all cases the student must make an original contribution in the chosen field by means of several papers and essays that comprise the body of the thesis. Corresponding to the academic degree, the last phase of an academic thesis is called in Spanish a defensa de grado, defensa magistral or defensa doctoral, in cases in which the university candidate is finalizing their licentiate, master's, or PhD program, respectively. According to a committee resolution, the dissertation can be approved or rejected by an academic committee consisting of the thesis director and at least one evaluator. All the dissertation referees must already have achieved at least the academic degree that the candidate is trying to reach.
Canada
At English-speaking Canadian universities, writings presented in fulfillment of undergraduate coursework requirements are normally called papers, term papers or essays. A longer paper or essay presented for completion of a 4-year bachelor's degree is sometimes called a major paper. High-quality research papers presented as the empirical study of a "postgraduate" consecutive bachelor with Honours or Baccalaureatus Cum Honore degree are called thesis (Honours Seminar Thesis). Major papers presented as the final project for a master's degree are normally called thesis; and major papers presenting the student's research towards a doctoral degree are called theses or dissertations.
At French-language universities, for the fulfillment of a master's degree, students can present a "mémoire" or a shorter "essai" (the latter requires the student to take more courses). For the fulfillment of a doctoral degree, they may present a "thèse" or an "essai doctoral" (here too, the latter requires more courses). All of these documents are usually synthetic monographs related to the student's research work.
A typical undergraduate paper or essay might be forty pages. Master's theses are approximately one hundred pages. PhD theses are usually over two hundred pages. This may vary greatly by discipline, program, college, or university. A study published in 2021 found that in Québec universities, between 2000 and 2020, master's and PhD theses averaged 127.4 and 245.6 pages respectively.
Theses Canada acquires and preserves a comprehensive collection of Canadian theses at Library and Archives Canada (LAC) through a partnership with Canadian universities who participate in the program. Most theses can also be found in the institutional repository of the university the student graduated from.
Croatia
At most university faculties in Croatia, a degree is obtained by defending a thesis after having passed all the classes specified in the degree programme. In the Bologna system, the bachelor's thesis, called završni rad (literally "final work" or "concluding work") is defended after 3 years of study and is about 30 pages long. Most students with bachelor's degrees continue onto master's programmes which end with a master's thesis called diplomski rad (literally "diploma work" or "graduate work"). The term dissertation is used for a doctoral degree paper (doktorska disertacija).
Czech Republic
In the Czech Republic, higher education is completed by passing all classes remaining in the educational compendium for the given degree and defending a thesis. For a bachelor's programme the thesis is called bakalářská práce (bachelor's thesis); for master's degrees, and also doctor of medicine or dentistry degrees, it is the diplomová práce (master's thesis); and for the Philosophiae doctor (PhD) degree it is the dissertation, dizertační práce. The thesis for the so-called Higher Professional School (Vyšší odborná škola, VOŠ) is called absolventská práce.
Finland
The following types of thesis are used in Finland (names in Finnish/Swedish):
Kandidaatintutkielma/kandidatavhandling is the dissertation associated with lower-level academic degrees (bachelor's degree), and at universities of applied science.
Pro gradu(-tutkielma)/(avhandling) pro gradu, colloquially referred to simply as 'gradu', now referred to as maisterintutkielma by many degree-awarding institutions is the dissertation for master's degrees, which make up the majority of degrees conferred in Finland, and this is therefore the most common type of thesis submitted in the country. The equivalent for engineering and architecture students is diplomityö/diplomarbete. At many Finnish universities, the 21st century has seen a substantial reduction in the requirements for this thesis level.
The highest-level theses are called lisensiaatintutkielma/licentiatavhandling and (tohtorin)väitöskirja/doktorsavhandling, for licentiate and doctoral degrees, respectively.
France
In France, the academic dissertation or thesis is called a thèse and it is reserved for the final work of doctoral candidates. The minimum page length is generally (and not formally) 100 pages (or about 400,000 characters), but is usually several times longer (except for technical theses and for "exact sciences" such as physics and maths).
To complete a master's degree in research, a student is required to write a mémoire, the French equivalent of a master's thesis in other higher education systems.
The word dissertation in French is reserved for shorter (1,000–2,000 words), more generic academic treatises.
The defense is called a soutenance.
Since 2023, at the end of the admission process, the doctoral student takes an oath of commitment to the principles of scientific integrity.
Germany
In Germany, an academic thesis is called Abschlussarbeit or, more specifically, the basic name of the degree complemented by -arbeit (rough translation: -work; e.g., Diplomarbeit, Masterarbeit, Doktorarbeit). For bachelor's and master's degrees, the name can alternatively be complemented by -thesis instead (e.g., Bachelorthesis).
Length is often given in page count and depends upon departments, faculties, and fields of study. A bachelor's thesis is often 40–60 pages long, a diploma thesis and a master's thesis usually 60–100. The required submission for a doctorate is called a Dissertation or Doktorarbeit. The submission for a Habilitation, which is an academic qualification, not an academic degree, is called Habilitationsschrift, not Habilitationsarbeit.
A doctoral degree is often earned with multiple levels of a Latin honors remark for the thesis ranging from summa cum laude (best) to rite (duly). A thesis can also be rejected with a Latin remark (non-rite, non-sufficit or worst as sub omni canone). Bachelor's and master's theses receive numerical grades from 1.0 (best) to 5.0 (failed).
India
In India the thesis defense is called a viva voce (Latin for "by live voice") examination (viva in short). Involved in the viva are two examiners, one guide (student guide) and the candidate. One examiner is an academic from the candidate's own university department (but not one of the candidate's supervisors) and the other is an external examiner from a different university.
In India, PG qualifications such as the MSc in Physics involve the submission of a dissertation in Part I and the submission of a project (a working model of an innovation) in Part II. Engineering and design qualifications such as BTech, B.E., B.Des, MTech, M.E. or M.Des also involve submission of a dissertation. In all cases, the dissertation work can be extended into a summer internship at certain research and development organizations, or developed into a PhD synopsis.
Indonesia
In Indonesia, the term thesis is used specifically to refer to master's theses. The undergraduate thesis is called skripsi, while the doctoral dissertation is called disertasi. In general, those three terms are usually referred to as tugas akhir (final assignment), which is mostly mandatory for the completion of a degree. Undergraduate students usually begin to write their final assignment in their third, fourth or fifth enrollment year, depending on the requirements of their respective disciplines and universities. In some universities, students are required to write a proposal skripsi or proposal tesis (thesis proposal) before they can write their final assignment. If the thesis proposal is considered to fulfill the qualification by the academic examiners, students may then proceed to write their final assignment.
Iran
In Iran, students are usually required to present a thesis (pāyān-nāmeh) for their master's degree and a dissertation (resāleh) for their doctorate degree, both of which require the students to defend their research before a committee and gain its approval. Most of the norms and rules of writing a thesis or a dissertation are influenced by the French higher education system.
Italy
In Italy there are normally three types of thesis. In order of complexity: one for the Laurea (equivalent to the UK Bachelor's Degree), another one for the Laurea Magistrale (equivalent to the UK Master's Degree) and then a thesis to complete the Dottorato di Ricerca (PhD). Thesis requirements vary greatly between degrees and disciplines, ranging from as low as 3–4 ECTS credits to more than 30. Thesis work is mandatory for the completion of a degree.
Kazakhstan
In Kazakhstan, a bachelor's degree typically requires a bachelor's diploma work (kz: "бакалаврдың дипломдық жұмысы"), while the master's and PhD degrees require a master's/doctoral dissertation (kz: "магистрлік/докторлық диссертация"). All such works are publicly presented to a special council at the end of the training, which thoroughly examines the work. PhD candidates may be allowed to present their work without a written thesis if they provide enough publications in leading journals of the field, of which one should specifically be a review article.
Malaysia
Malaysian universities often follow the British model for dissertations and degrees. However, a few universities follow the United States model for theses and dissertations. Some public universities have both British and US style PhD programs. Branch campuses of British, Australian and Middle East universities in Malaysia use the respective models of the home campuses.
Pakistan
In Pakistan, at the undergraduate level the thesis is usually called a final year project, as it is completed in the senior year of the degree; the name project usually implies that the work carried out is less extensive than a thesis and carries fewer credit hours as well. The undergraduate-level project is presented through an elaborate written report and a presentation to the advisor, a board of faculty members and students. At the graduate level (i.e., in an MS programme), however, some universities allow students to accomplish a project of 6 credits or a thesis of 9 credits; at least one publication is normally considered enough for the awarding of the degree with project and is considered mandatory for the awarding of a degree with thesis. A written report and a public thesis defense are mandatory, in the presence of a board of senior researchers consisting of members from an outside organization or a university. A PhD candidate is supposed to accomplish extensive research work to fulfill the dissertation requirements, with international publications being a mandatory requirement. The defense of the research work is done publicly.
Philippines
In the Philippines, an academic thesis is named by the degree, such as bachelor/undergraduate thesis or masteral thesis. However, in Philippine English, the term doctorate is typically replaced with doctoral (as in the case of "doctoral dissertation"), though in official documentation the former is still used. The terms thesis and dissertation are commonly used interchangeably in everyday language yet it is generally understood that a thesis refers to bachelor/undergraduate and master academic work while a dissertation is named for doctorate work.
The Philippine system is influenced by American collegiate system, in that it requires a research project to be submitted before being allowed to write a thesis. This project is mostly given as a prerequisite writing course to the actual thesis and is accomplished in the term period before; supervision is provided by one professor assigned to a class. This project is later to be presented in front of an academic panel, often the entire faculty of an academic department, with their recommendations contributing to the acceptance, revision, or rejection of the initial topic. In addition, the presentation of the research project will help the candidate choose their primary thesis adviser.
An undergraduate thesis is completed in the final year of the degree alongside existing seminar (lecture) or laboratory courses, and is often divided into two presentations: proposal and thesis presentations (though this varies across universities), whereas a master's thesis or doctoral dissertation is accomplished in the last term alone and is defended once. In most universities, a thesis is required for the bestowment of a degree to a candidate alongside a number of units earned throughout their academic period of stay, though for practice- and skills-based degrees a practicum and a written report can be submitted instead. The examination board often consists of three to five examiners, often professors in a university (with a master's or PhD degree), depending on the university's examination rules. Required word length, complexity, and contribution to scholarship varies widely across universities in the country.
Poland
In Poland, a bachelor's degree usually requires a praca licencjacka (bachelor's thesis) or the similar level degree in engineering requires a praca inżynierska (engineer's thesis/bachelor's thesis), the master's degree requires a praca magisterska (master's thesis). The academic dissertation for a PhD is called a dysertacja or praca doktorska. The submission for the Habilitation is called praca habilitacyjna or dysertacja habilitacyjna. Thus the term dysertacja is reserved for PhD and Habilitation degrees. All the theses need to be "defended" by the author during a special examination for the given degree. Examinations for PhD and Habilitation degrees are public.
Portugal and Brazil
In Portugal and Brazil, a dissertation (dissertação) is required for completion of a master's degree. The defense is done in a public presentation in which teachers, students, and the general public can participate. For the PhD, a thesis (tese) is presented for defense in a public exam. The exam typically extends over 3 hours. The examination board typically involves 5 to 6 scholars (including the advisor) or other experts with a PhD degree (generally at least half of them must be external to the university where the candidate defends the thesis, but it may depend on the university). Each university / faculty defines the length of these documents, and it can vary also in respect to the domains (a thesis in fields like philosophy, history, geography, etc., usually has more pages than a thesis in mathematics, computer science, statistics, etc.) but typical numbers of pages are around 60–80 for MSc and 150–250 for PhD.
In Brazil the Bachelor's Thesis is called TCC or Trabalho de Conclusão de Curso (Final Term / Undergraduate Thesis / Final Paper).
Russia, Belarus, Ukraine
In Russia, Belarus, and Ukraine an academic dissertation or thesis is called what can be literally translated as a "master's degree work" (thesis), whereas the word dissertation is reserved for doctoral theses (Candidate of Sciences). To complete both bachelor's and master's degrees, a student is required to write a thesis and then defend the work publicly. The length of this manuscript is usually given in page count and depends upon the educational institution, its departments, faculties, and fields of study.
Slovenia
At universities in Slovenia, an academic thesis called a diploma thesis is a prerequisite for completing undergraduate studies. The thesis used to be 40–60 pages long, but has been reduced to 20–30 pages in new Bologna process programmes. To complete Master's studies, a candidate must write a magistrsko delo (Master's thesis) that is longer and more detailed than the undergraduate thesis. The required submission for the doctorate is called a doktorska disertacija (doctoral dissertation). In pre-Bologna programmes, students were able to skip the preparation and presentation of a Master's thesis and proceed directly towards the doctorate.
Slovakia
In Slovakia, higher education is completed by defending a thesis: a bachelor's thesis (bakalárska práca) for bachelor's programmes, a master's thesis (diplomová práca) for master's degrees as well as doctor of medicine or dentistry degrees, and a dissertation (dizertačná práca) for the Philosophiae doctor (PhD) degree.
Sweden
In Sweden, there are different types of theses. Practices and definitions vary between fields but commonly include the C thesis/Bachelor thesis, which corresponds to 15 HP or 10 weeks of independent studies; the D thesis/Magister/one-year master's thesis, which corresponds to 15 HP or 10 weeks of independent studies; and the E thesis/two-year master's thesis, which corresponds to 30 HP or 20 weeks of independent studies. The undergraduate theses are called uppsats ("essay"), sometimes examensarbete, especially at technical programmes.
After that there are two types of postgraduate theses: the licentiate thesis (licentiatuppsats) and the PhD dissertation (doktorsavhandling). A licentiate degree is approximately "half a PhD" in terms of the size and scope of the thesis. Swedish PhD studies should in theory last for four years, including course work and thesis work, but as many PhD students also teach, the PhD often takes longer to complete. The thesis can be written as a monograph or as a compilation thesis; in the latter case, the introductory chapters are called the kappa (literally "coat").
United Kingdom
Outside the academic community, the terms thesis and dissertation are interchangeable. At universities in the United Kingdom, the term thesis is usually associated with PhD/EngD (doctoral) and research master's degrees, while dissertation is the more common term for a substantial project submitted as part of a taught master's degree or an undergraduate degree (e.g. MSc, BA, BSc, BMus, BEd, BEng etc.).
Thesis word lengths may differ by faculty/department and are set by individual universities.
A wide range of supervisory arrangements can be found in the British academy, from single supervisors (more common for undergraduate and master's-level work) to supervisory teams of up to three supervisors. In teams, there will often be a Director of Studies, usually someone with broader experience (perhaps having passed some threshold of successful supervisions). The Director may be involved with regular supervision along with the other supervisors, or may have more of an oversight role, with the other supervisors taking on the more day-to-day responsibilities of supervision.
United States
In some U.S. doctoral programs, the "dissertation" can take up the major part of the student's total time spent (along with two or three years of classes) and may take years of full-time work to complete. At most universities, dissertation is the term for the required submission for the doctorate, and thesis refers only to the master's degree requirement.
Thesis is also used to describe a cumulative project for a bachelor's degree and is more common at selective colleges and universities, or for those seeking admittance to graduate school or an honors academic designation. These projects are called "senior projects" or "senior theses"; they are generally done in the senior year near graduation after having completed other courses, the independent study period, and the internship or student teaching period (the completion of most of the requirements before the writing of the paper ensures adequate knowledge and aptitude for the challenge). Unlike a dissertation or master's thesis, they are not as long and they do not require a novel contribution to knowledge or even a very narrow focus on a set subtopic. Like them, however, they can be lengthy and require months of work, they require supervision by at least one professor adviser, they must be focused on a certain area of knowledge, and they must use an appreciable amount of scholarly citations. They may or may not be defended before a committee, but usually are not; there is generally no preceding examination before the writing of the paper, except at a very few colleges. Because a graduate thesis or dissertation must be narrower and more novel, the result of original research, it usually has a smaller proportion of work cited from other sources, though its greater length may mean it still has more total citations.
Specific undergraduate courses, especially writing-intensive courses or courses taken by upperclassmen, may also require one or more extensive written assignments referred to variously as theses, essays, or papers. Increasingly, high schools are requiring students to complete a senior project or senior thesis on a chosen topic during the final year as a prerequisite for graduation. The extended essay component of the International Baccalaureate Diploma Programme, offered in a growing number of American high schools, is another example of this trend.
Generally speaking, a dissertation is judged as to whether it makes an original and unique contribution to scholarship. Lesser projects (a master's thesis, for example) are judged by whether they demonstrate mastery of available scholarship in the presentation of an idea.
The required complexity or quality of research of a thesis may vary significantly among universities or programs.
Thesis examinations
One of the requirements for certain advanced degrees is often an oral examination (called a viva voce examination or just viva in the UK and certain other English-speaking countries). This examination normally occurs after the dissertation is finished but before it is submitted to the university, and may comprise a presentation (often public) by the student and questions posed by an examining committee or jury. In North America, an initial oral examination in the field of specialization may take place just before the student settles down to work on the dissertation. An additional oral exam may take place after the dissertation is completed and is known as a thesis defense or dissertation defense, which at some universities may be a mere formality and at others may result in the student being required to make significant revisions.
Examination results
The result of the examination may be given immediately following deliberation by the examination committee (in which case the candidate may immediately be considered to have received their degree), or at a later date, in which case the examiners may prepare a defense report that is forwarded to a Board or Committee of Postgraduate Studies, which then officially recommends the candidate for the degree.
Potential decisions (or "verdicts") include:
Accepted/pass with no corrections.
The thesis is accepted as presented. A grade may be awarded, though in many countries PhDs are not graded at all, and in others, only one of the theoretically possible grades (the highest) is ever used in practice.
The thesis must be revised.
Revisions (for example, correction of numerous grammatical or spelling errors; clarification of concepts or methodology; an addition of sections) are required. One or more members of the jury or the thesis supervisor will make the decision on the acceptability of revisions and provide written confirmation that they have been satisfactorily completed. If, as is often the case, the needed revisions are relatively modest, the examiners may all sign the thesis with the verbal understanding that the candidate will review the revised thesis with their supervisor before submitting the completed version.
Extensive revision required.
The thesis must be revised extensively and undergo the evaluation and defense process again from the beginning with the same examiners. Problems may include theoretical or methodological issues. A candidate who is not recommended for the degree after the second defense must normally withdraw from the program.
Unacceptable.
The thesis is unacceptable and the candidate must withdraw from the program. This verdict is given only when the thesis requires major revisions and when the examination makes it clear that the candidate is incapable of making such revisions.
At most North American institutions the latter two verdicts are extremely rare, for two reasons. First, to obtain the status of doctoral candidates, graduate students typically pass a qualifying examination or comprehensive examination, which often includes an oral defense. Students who pass the qualifying examination are deemed capable of completing scholarly work independently and are allowed to proceed with working on a dissertation. Second, since the thesis supervisor (and the other members of the advisory committee) will normally have reviewed the thesis extensively before recommending the student to proceed to the defense, such an outcome would be regarded as a major failure not only on the part of the candidate but also on the part of the candidate's supervisor (who should have recognized the substandard quality of the dissertation long before the defense was allowed to take place). It is also fairly rare for a thesis to be accepted without any revisions; the most common outcome of a defense is for the examiners to specify minor revisions (which the candidate typically completes in a few days or weeks).
At universities on the British pattern it is not uncommon for theses at the viva stage to be subject to major revisions in which a substantial rewrite is required, sometimes followed by a new viva. Very rarely, the thesis may be awarded the lesser degree of M.Phil. (Master of Philosophy) instead, preventing the candidate from resubmitting the thesis.
Australia
In Australia, doctoral theses are usually examined by three examiners (although some universities, such as the Australian Catholic University, the University of New South Wales, and Western Sydney University, have shifted to using only two examiners), without a live defense except in extremely rare cases. In the case of a master's degree by research, the thesis is usually examined by only two examiners. Typically, one of these examiners will be from within the candidate's own department; the other(s) will usually be from other universities and often from overseas. Following submission of the thesis, copies are sent by mail to the examiners, and their reports are then sent back to the institution.
Similar to a thesis for a master's degree by research, a thesis for the research component of a master's degree by coursework is also usually examined by two examiners, one from the candidate's department and one from another university. For an Honours year, which is a fourth year in addition to the usual three-year bachelor's degree, the thesis is also examined by two examiners, though both are usually from the candidate's own department. Honours and Master's theses sometimes require an oral defense before they are accepted.
Germany
In Germany, a thesis is usually examined with an oral examination. This applies to almost all Diplom, Magister, master's and doctoral degrees as well as to most bachelor's degrees. However, a process that allows for revisions of the thesis is usually only implemented for doctoral degrees.
There are several different kinds of oral examinations used in practice. The Disputation, also called Verteidigung ("defense"), is usually public (at least to members of the university) and is focused on the topic of the thesis. In contrast, the Rigorosum (oral exam) is not held in public and also encompasses fields in addition to the topic of the thesis. The Rigorosum is only common for doctoral degrees. Another term for an oral examination is Kolloquium, which generally refers to a usually public scientific discussion and is often used synonymously with Verteidigung.
In each case, what exactly is expected differs between universities and between faculties. Some universities also demand a combination of several of these forms.
Malaysia
As in the British model, a PhD or MPhil student is required to submit their thesis or dissertation for examination by two or three examiners. The first examiner is from the university concerned, the second examiner is from another local university, and the third examiner is from a suitable foreign university (usually from a Commonwealth country). The choice of examiners must be approved by the university senate. In some public universities, a PhD or MPhil candidate may also have to show a number of publications in peer-reviewed academic journals as part of the requirement. An oral viva is conducted after the examiners have submitted their reports to the university. The oral viva session is attended by the Oral Viva chairman, a rapporteur with a PhD qualification, the first examiner, the second examiner and sometimes the third examiner.
Branch campuses of British, Australian and Middle East universities in Malaysia use the respective models of the home campuses to examine their PhD or MPhil candidates.
Philippines
In the Philippines, a thesis is followed by an oral defense. In most universities, this applies to all bachelor's, master's, and doctorate degrees. The oral defense is held once per semester (usually in the middle or by the end), with a presentation of revisions (a so-called "plenary presentation") at the end of each semester. The oral defense is typically not held in public for bachelor's and master's degrees; however, a colloquium is held for doctorate degrees.
Portugal
In Portugal, a thesis is examined with an oral defense, which includes an initial presentation by the candidate followed by an extensive question and answer session.
North America
In North America, the thesis defense or oral defense is the final examination for doctoral candidates, and sometimes for master's candidates.
The examining committee normally consists of the thesis committee, usually a given number of professors mainly from the student's university plus their primary supervisor, an external examiner (someone not otherwise connected to the university), and a chairperson. Each committee member will have been given a completed copy of the dissertation prior to the defense, and will come prepared to ask questions about the thesis itself and the subject matter. In many schools, master's thesis defenses are restricted to the examinee and the examiners, but doctoral defenses are open to the public.
The typical format will see the candidate giving a short (20–40-minute) presentation of their research, followed by one to two hours of questions.
At some U.S. institutions, a longer public lecture (known as a "thesis talk" or "thesis seminar") by the candidate will accompany the defense itself, in which case only the candidate, the examiners, and other members of the faculty may attend the actual defense.
Russia and Ukraine
A student in Russia or Ukraine has to complete a thesis and then defend it in front of their department. Sometimes the defense meeting is made up of the learning institute's professionals, and sometimes the student's peers are allowed to view or join in. After the presentation and defense of the thesis, the department issues a final conclusion confirming that none of its members have reservations about the content and quality of the thesis.
The conclusion on the thesis has to be approved by the rector of the educational institute. This conclusion (the final grade, so to speak) of the thesis can be defended or argued not only at the candidate's own thesis council, but also at any other thesis council in Russia or Ukraine.
Spain
The former Diploma de estudios avanzados (DEA) lasted two years, and candidates were required to complete coursework and demonstrate their ability to research the specific topics they had studied. From 2011 on, these courses were replaced by academic Master's programmes that include specific training on epistemology and scientific methodology. After its completion, students are able to enroll in a specific PhD programme (programa de doctorado) and begin a dissertation on a set topic for a maximum time of three years (full-time) or five years (part-time). All students must have a full professor as an academic advisor (director de tesis) and a tutor, who is usually the same person.
A dissertation (tesis doctoral), with an average of 250 pages, is the main requisite, along with typically one previously published journal article. Once candidates have submitted their written dissertations, they are evaluated by two external academics (evaluadores externos), and the dissertation is subsequently exhibited publicly for fifteen calendar days. After its approval, candidates must defend their research publicly before a three-member committee (tribunal) with at least one visiting academic: chair, secretary and member (presidente, secretario y vocal).
A typical public thesis defense (defensa) lasts 45 minutes, and all attendees holding a doctoral degree are eligible to ask questions.
United Kingdom, Ireland and Hong Kong
In Hong Kong, Ireland and the United Kingdom, the thesis defense is called a viva voce (Latin for 'by live voice') examination, or viva for short. A typical viva lasts for approximately 3 hours, though there is no formal time limit. Involved in the viva are two examiners and the candidate. Usually, one examiner is an academic from the candidate's own university department (but not one of the candidate's supervisors) and the other is an external examiner from a different university. Increasingly, the examination may involve a third academic, the 'chair'; this person, from the candidate's institution, acts as an impartial observer with oversight of the examination process to ensure that the examination is fair. The 'chair' does not ask academic questions of the candidate.
In the United Kingdom, there are only two or at most three examiners, and in many universities the examination is held in private. The candidate's primary supervisor is not permitted to ask or answer questions during the viva, and their presence is not necessary. However, some universities permit members of the faculty or the university to attend. At the University of Oxford, for instance, any member of the university may attend a DPhil viva (the university's regulations require that details of the examination and its time and place be published formally in advance) provided they attend in full academic dress.
Submission
A submission of the thesis is the last formal requirement for most students after the defense. By the final deadline, the student must submit a complete copy of the thesis to the appropriate body within the accepting institution, along with the appropriate forms, bearing the signatures of the primary supervisor, the examiners, and in some cases, the head of the student's department. Other required forms may include library authorizations (giving the university library permission to make the thesis available as part of its collection) and copyright permissions (in the event that the student has incorporated copyrighted materials in the thesis). Many large scientific publishing houses (e.g. Taylor & Francis, Elsevier) use copyright agreements that allow the authors to incorporate their published articles into dissertations without separate authorization.
Once all the paperwork is in order, copies of the thesis may be made available in one or more university libraries. Specialist abstracting services exist to publicize the content of these beyond the institutions in which they are produced. Many institutions now insist on submission of digitized as well as printed copies of theses; the digitized versions of successful theses are often made available online.
See also
Capstone course
Compilation thesis
Comprehensive examination
Dissertation Abstracts
Grey literature
Postgraduate education
Collection of articles
Academic journal
Academic publishing
Treatise
External links
en.wikibooks.org/wiki/ETD Guide – Guide to electronic theses and dissertations on Wikibooks
Networked Digital Library of Theses and Dissertations (NDLTD)
EThOS Database – Database of UK doctoral theses available through the British Library
Academia
Educational assessment and evaluation
Grey literature
Rhetoric
Scientific documents
Action learning

Action Learning is an approach to problem solving that involves taking action and reflecting upon the results. This method is purported to help improve the problem-solving process and simplify the solutions developed as a result. The theory of Action Learning and its epistemological position were originally developed by Reg Revans, who applied the method to support organizational and business development initiatives and to improve problem-solving efforts.
Action Learning is effective in developing a number of individual leadership and team problem-solving skills, and has become a component in many corporate and organizational leadership development programs. The strategy is advertised as being different from the "one size fits all" curricula that are characteristic of many training and development programs.
Overview
Action Learning is fundamentally a cycle of "doing" and "reflecting" stages. In most forms of action learning, a coach is included and is responsible for promoting and facilitating learning, as well as encouraging the team to be self-managing.
The Action Learning process includes:
An important and often complex problem
A diverse problem-solving team
An environment that promotes curiosity, inquiry, and reflection
A requirement that talk be converted into action and, ultimately, a solution
A collective commitment to learning.
History and Development
The action learning approach was originated by Reg Revans. Formative influences for Revans included his time working as a physicist at the University of Cambridge, where he noted the importance of each scientist describing their own ignorance, sharing experiences, and communally reflecting in order to learn. Revans further developed the method in the 1940s while working for the United Kingdom's National Coal Board, where he encouraged managers to meet together in small groups to share their experiences and ask each other questions about what they saw and heard. From these experiences, Revans concluded that conventional instructional methods were largely ineffective, and that individuals needed to be aware of their lack of relevant knowledge and be prepared to explore that ignorance with suitable questions and help from other people in similar positions.
Formula
Revans makes the pedagogical approach of Action Learning more precise in the opening chapter of his book, which describes "learning" (L) as the result of combining "programmed knowledge" (P) and "questioning" (Q), frequently abbreviated by the formula L = P + Q.
In this paradigm, "questioning" is intended to create insight into what people see, hear or feel, and may be divided into multiple categories of question, including open and closed questions. Although questioning is considered the cornerstone of the method, more relaxed formulations have enabled Action Learning to gain use in many countries all over the world, including the United States, Canada, Latin America, the Middle East, Africa, and Asia-Pacific.
The International Management Centres Association and Michael Marquardt have both proposed an extension to this formula with the addition of R for "reflection": L = P + Q + R.
This additional element emphasizes the point that "great questions" should evoke thoughtful reflections while considering the current problem, the desired goal, designing strategies, developing action or implementation plans, or executing action steps that are components of the implementation plan.
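For reference, the two formulations above can be set out as display equations (a minimal LaTeX rendering; the symbols L, P, Q, and R follow the definitions given in the text):

% Revans' original formulation: learning results from combining
% programmed knowledge with questioning insight
\[ L = P + Q \]

% Extension proposed by the IMCA and Marquardt, adding reflection
\[ L = P + Q + R \]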
Questioning in Action Learning
Action Learning holds that one of the keys to effective problem solving is asking the 'right question'. When asked of the right people at the right time, these questions help obtain the necessary information. The Action Learning process, which primarily uses a questioning approach, can be more helpful than offering advice because it assumes that each person has the capacity to find their own answers.
Action-based learning questions are questions that are based on the approach of action learning where one solves real-life problems that involve taking action and reflecting upon the results. As opposed to asking a question to gain information, in Action Learning the purpose of questioning is to help someone else explore new options and perspectives, and reflect in order to make better decisions.
Types of questions
Closed questions
Closed questions do not allow the respondents to develop their response, generally by restricting respondents to a limited set of possible answers. Answers to closed questions are often monosyllabic words or short phrases, including "yes" and "no".
While closed questions typically have simple answers, they should not be interpreted as simple questions. Closed questions can range widely in complexity, and may force the respondent to think significantly before answering. The purposes of closed questions include obtaining facts, initiating the conversation, and maintaining conversational control for the questioner.

Examples of closed questions:
"What is your name?"
"What color is the sky today?"
"When two quantities are dependent on each other, does an increase in one always leads to an increase in the other?"
Open questions
Open questions allow the respondent to expand on or explore their response, and do not have a single correct answer. In the framework of Action Learning, this gives the respondent the freedom to discover new ideas, consider different possibilities, and decide on the course of action which is right for them.
Open-ended questions are not always long, and shorter questions often have equal or greater impact than longer ones. When using the Action Learning approach, it is important to be aware of one's tone and language. The goal is usually to ask challenging questions, or to challenge the respondent's perspective. The purposes of open questions include encouraging discussion and reflection, expanding upon a closed question, and giving control of the conversation to the respondent.

Examples of open questions:
"Why do you think that might have happened?"
"How did that make you feel?"
"What problems do you think this strategy could cause?"
Use in organizations
Organizations apply the Action Learning question method to support organizational development. Action Learning is practiced by a wide community of businesses, governments, non-profits, and educational institutions. Organizations may also use Action Learning in a virtual environment, a cost-effective approach that enables the widespread use of Action Learning at all levels of an organization. Action e-Learning provides a viable alternative for organizations interested in adapting the action learning process for online delivery with groups whose members are not co-located.
Robert Kramer pioneered the use of Action Learning for officials in the United States government, and at the European Commission in Brussels and Luxembourg. He also introduced Action Learning to scientists at the European Environment Agency in Copenhagen, to officials of the Estonian government at the State Chancellery in Tallinn, Estonia, and to students of communication and media studies at Corvinus University of Budapest.
Models of Action Learning
The influence of Revans's Action Learning Formula can be seen today in many leadership and organization development initiatives in corporate training and executive education institutes. Since the 1940s, several developments of Revans's original training model have been created. As with other pedagogical approaches, practitioners have built on Revans's original work and adapted its tenets to accommodate their specific needs.
Action Reflection Learning and the MiL model
One such branch of Action Learning is Action Reflection Learning (ARL), which originated in Sweden among educators and consultants under the guidance of Lennart Rohlin of the MiL Institute in the 1970s. Using the "MiL model," ARL gained momentum in the field of Leadership in International Management.
The main differences between Revans' approach to action learning and the 'MiL Model' in the 1980s are:
The role of a project team advisor (later called Learning Coach),
The use of "team projects" rather than individual challenges,
The duration of the sessions, which is more flexible in ARL designs.
The MiL model and ARL evolved as practitioners responded to diverse needs and restrictions—MiL practitioners varied the number and duration of the sessions, the type of project selected, the role of the Learning Coach and the style of their interventions. In 2004, Isabel Rimanoczy researched and codified the ARL methodology, identifying 16 elements and 10 underlying principles.
The World Institute for Action Learning model
The World Institute for Action Learning (WIAL) model was developed by Michael Marquardt, Skipton Leonard, Bea Carson and Arthur Freedman. The model starts with two simple "ground rules" that ensure that statements are related to questions, and grant authority to the coach in order to promote learning. Team members may develop additional ground rules, norms, and roles as they deem necessary or advantageous. Addressing Revans' concern that a coach's over-involvement in the problem-solving process will engender dependency, WIAL coaches only ask questions that encourage team members to reflect on the team's behavior (what is working, can be improved, or done differently) in efforts to improve learning and, ultimately, performance.
Executive Action Learning (EAL) Model
The action learning model has evolved from an organizational development tool led by learning and development (L&D) managers to an organizational alignment and performance tool led by executives, in which CEOs and their executive teams facilitate action-learning sessions to align organizational objectives across various organizational levels and departments. One such example is the Executive Action-Learning (EAL) Model, which originated in the United States in 2005.
The EAL model differs from the traditional organizational training methods by shifting the focus from professor-led, general knowledge memorization and presentations to executive-led and project-based experiential reflection and problem-solving as the major learning tool.
EAL makes the following executive education paradigm focus shifts:
From academic (theoretical)-driven education to experiential-driven education
From ad hoc courses to integrated organization development
From individual knowledge to the collective intelligence
From divisional training to organizational alignment initiative
From teacher-driven learning to student-driven learning
From generic training courses to customized training programs
From passive (listening) to active (doing) learning
From a teaching process to an advisory process
From lecturing to coaching
From memorizing to brainstorming
From subjective thinking to critical thinking
From conventional thinking to creative thinking
From competitive learning to collaborative learning
From problem-focus to solution-focus
From exams to project-based assessments
From knowledge transfer to knowledge creation
From learn-and-forget to sustained performance development
From human resources (training cost) to human capital (training investment)
From intangible benefits to measurable results using key performance indicators (KPIs)
"Unlearning" as a prerequisite for "learning"
The process of learning more creative ways of thinking, feeling, and being is achieved in Action Learning by reflecting on what is working now and on actions that can be improved. Action Learning is consistent with the principles of positive psychology and appreciative inquiry by encouraging team members to build on strengths and learn from challenges. In Action Learning, reflecting on what has and has not worked helps team members unlearn what doesn't work and develop new and improved ways to increase productivity moving forward.
Robert Kramer applies the theory of art, creativity and "unlearning" of the psychologist Otto Rank to his practice of Action Learning. In Kramer's work, Action Learning questions allow group members to "step out of the frame of the prevailing ideology," reflect on their assumptions and beliefs, and re-frame their choices. Through the lens of Otto Rank's work on understanding art and artists, Action Learning can be seen as the never-completed process of learning how to "step out of the frame" of the ruling mindset, and learning how to unlearn.
Role of Facilitator in Action Learning
An ongoing challenge of Action Learning has been to take productive action while also taking the time necessary to capture the learning that results from reflecting on the results of that action. Usually, the urgency of the problem or task decreases or eliminates the reflective time necessary for learning. As a consequence, more and more organizations have recognized the critical importance of an Action Learning coach or facilitator in the process: someone who has the authority and responsibility to create the time and space for the group to learn at the individual, group and organizational level.
There is controversy, however, about the need for an Action Learning coach. Revans was skeptical about the use of learning coaches and, in general, of interventionist approaches; he believed the Action Learning set could manage its learning on its own. He also had a major concern that too much process facilitation would lead a group to become dependent on a coach or facilitator. Nevertheless, later in his development of the Action Learning method, Revans experimented with including a role that he described as a "supernumerary", which had many similarities to that of a facilitator or coach. Pedler distills Revans' thinking about the key role of the action learning facilitator as follows: (i) The initiator or "accoucheur": "No organisation is likely to embrace action learning unless there is some person within it ready to fight on its behalf. ...This useful intermediary we may call the accoucheur—the managerial midwife who sees that their organisation gives birth to a new idea..."
(ii) The set facilitator or "combiner": "there may be a need when it (the set) is first formed for some supernumerary brought in to speed the integration of the set ..." but "Such a combiner ... must contrive that it (the set) achieves independence of them at the earliest possible moment ...".
(iii) The facilitator of organizational learning or the "learning community" organiser:
"The most precious asset of any organization is the one most readily overlooked: its capacity to build upon its lived experience, to learn from its challenges and to turn in a better performance by inviting all and sundry to work out for themselves what that performance ought to be."Hale suggested that the facilitator role developed by Revans be incorporated into any standards for Action Learning facilitation accreditation. Hale also suggests the Action Learning facilitator role includes the functions of mobilizer, learning set adviser, and learning catalyst. To increase the reflective, learning aspect of Action Learning, many groups now adopt the practice or norm of focusing on questions rather than statements while working on the problem and developing strategies and actions.
Self-managed action learning is a variant of Action Learning that dispenses with the need for a facilitator of the action learning set, including in virtual and hybrid settings. There are a number of problems, however, with purely self-managed teams (i.e., with no coach). It has been noted that self-managing teams (such as task forces) seldom take the time to reflect on what they are doing or make efforts to identify key lessons learned from the process. Without reflection, team members are likely to import organizational or sub-unit cultural norms and familiar problem-solving practices into the problem-solving process without explicitly testing their validity and utility. Team members employ assumptions, mental models, and beliefs about methods or processes that are seldom openly challenged, much less tested. As a result, teams often apply traditional problem-solving methods to non-traditional, urgent, critical, and discontinuous problems. In addition, team members often "leap" from the initial problem statement to some form of brainstorming that they assume will produce a viable solution. These suggested solutions typically provoke objections, doubts, concerns, or reservations from other team members who advocate their own preferred solutions. The conflicts that ensue are generally both unproductive and time-consuming. As a result, self-managed teams tend to split or fragment rather than develop into a cohesive, high-performing team.
Because of these typical characteristics of self-managing teams, many theorists and practitioners have argued that real and effective self-management in action learning requires coaches with the authority to intervene whenever they perceive an opportunity to promote learning or improve team performance. Without this facilitator role, there is no assurance that the team will make the time needed for the periodic, systemic, and strategic inquiry and reflection that is necessary for effective individual, team, and organizational learning.
Organizations and Community
A number of organizations sponsor events and publications focusing on the implementation and improvement of Action Learning, including the journal Action Learning: Research & Practice, the World Institute of Action Learning Global Forum, the Global Forum on Executive Development and Business Driven Action Learning, and the Action Learning, Action Research Association World Congress. LinkedIn interest groups devoted to Action Learning include WIAL Network, Action Learning Forum, International Foundation for Action Learning, Global Forum on Business Driven Action Learning and Executive Development, Learning Thru Action, and Action Research and Learning in Organizations.
See also
Action research
Action teaching
Experiential learning
Inquiry-based learning
Large-group capacitation
Learning cycle
Further reading
Boshyk, Yury, and Dilworth, Robert L. 2010. Action Learning and its Applications. Basingstoke, UK: Macmillan.
Boshyk, Yury. 2000. Business Driven Action Learning: Global Best Practices. Basingstoke, UK: Macmillan.
Boshyk, Yury. 2002. Action Learning Worldwide: Experiences of Leadership and Organizational Development. Basingstoke, UK: Macmillan.
Carrington, L. House Proud: Action Learning is Paying Dividends at Building Firm, People Management, 5 December 2002, pp 36–38.
Chambers, A. and Hale, R. 2007. Keep Walking: Leadership Learning in Action, MX Publishing; 2nd edition (9 November 2009), UK.
Collingham, B., Critten, P., Garnett, J. and Hale, R. (2007) A Partnership Approach to Developing and Accrediting Work Based Learning – Creating Successful Work Based Learning – Meeting the Skills Challenge for Performance Improvement, Inaugural Conference, British Institute for Learning and Development, Royal Society of Medicine, London 17 May 2007.
Crainer, Stuart. 1999. The 75 Greatest Management Decisions Ever Made. New York: AMACOM Publishing
Critten, P. & Hale, R. (2006) 'From Work Based/ Action learning to Action Research – Towards a Methodology for the Worker/ Practitioner researcher' The Work-based Learning Network of the Universities Association for Life-Long learning Annual Conference: 'Work Based Projects: The Worker as Researcher 24–25 April 2006 University of Northampton.
Dilworth, R. L., and Willis, V. 2003. Action Learning: Images and Pathways.
Freedman, A.M. & Leonard, H.S. 2013. Leading organizational change using action learning: What leaders should know before committing to a consulting contract. Reston, VA: Learning Thru Action Press.
Kozubska, J. & MacKenzie, B. 2012. Differences and impact through action learning. Action Learning: Research & Practice, 9(2): 145–164.
Leonard, H.S. & Freedman, A.M. 2013. Great solutions through action learning: success every time. Reston, VA: Learning Thru Action Press.
McGill & N. Beech (Eds) Reflective learning in practice, Aldershot, Gower.
Hale, Richard. 2014. Fundamentals of Action Learning, Training Journal, August, 2014, pp. 30–36.
Hale, Richard. 2014. Fundamentals of Action Learning: Knowledge Mapping, Training Journal, September, 2014.
Hale, Richard. 2014. Fundamentals of Action Learning: Mobilising Action Learning, Training Journal, October, 2014.
Marquardt, M. J. 1999. Action learning in action. Palo Alto, CA:Davies-Black.
Marquardt, M. J. 2004. Harnessing the power of action learning. T+D, 58(6): 26–32.
Marquardt, M.J. 2011. Optimizing the power of action learning. Boston: Nicholas Brealey Publishing.
Marquardt, M.J. & Roland Yeo (2012). Breakthrough Problem Solving with Action Learning: Concepts and Cases. Stanford, CA: Stanford University Press.
Martinsons, M.G. 1998. MBA action learning projects. Hong Kong University Press.
O'Neil, J. and Marsick, V.J. 2007. Understanding Action Learning. NY: AMACOM Publishing
Pedler, M., (Ed.). 1991. Action learning in practice (2nd ed.). Aldershot, UK: Gower.
Pedler, M. 1996. Action learning for managers. London: Lemos and Crane.
Raelin, J. A. 1997. Action learning and action science: Are they different? Organizational Dynamics, 26(1): 21–34.
Raelin, J. A. 2000. Work-based learning: The new frontier of management development. Reading, MA: Addison-Wesley.
Rimanoczy, I., and Turner, E. 2008. Action Reflection Learning: solving real business problems by connecting learning with earning. US, Davies-Black Publishing.
Rohlin, L., Turner, E. and others. 2002. Earning while Learning in Global Leadership: the Volvo MiL Partnership. Sweden, MiL Publishers AB.
Smith, S. & Smith, L. (2017) Assessing the value of action learning for social enterprises and charities. Action Learning: Research and Practice (14)3: 230-242
Sawchuk, P. H. 2003. Adult learning and technology in working class life. New York: Cambridge University Press.
Learning methods
Educational practices
Test equipment
Behavioral modernity

Behavioral modernity is a suite of behavioral and cognitive traits believed to distinguish current Homo sapiens from other anatomically modern humans, hominins, and primates. Most scholars agree that modern human behavior can be characterized by abstract thinking, planning depth, symbolic behavior (e.g., art, ornamentation), music and dance, exploitation of large game, and blade technology, among others.
Underlying these behaviors and technological innovations are cognitive and cultural foundations that have been documented experimentally and ethnographically by evolutionary and cultural anthropologists. These human universal patterns include cumulative cultural adaptation, social norms, language, and extensive help and cooperation beyond close kin.
Within the tradition of evolutionary anthropology and related disciplines, it has been argued that the development of these modern behavioral traits, in combination with the climatic conditions of the Last Glacial Period and Last Glacial Maximum causing population bottlenecks, contributed to the evolutionary success of Homo sapiens worldwide relative to Neanderthals, Denisovans, and other archaic humans.
Debate continues as to whether anatomically modern humans were behaviorally modern as well. There are many theories on the evolution of behavioral modernity. These approaches tend to fall into two camps: cognitive and gradualist. The Later Upper Paleolithic Model theorizes that modern human behavior arose through cognitive, genetic changes in Africa abruptly around 40,000–50,000 years ago around the time of the Out-of-Africa migration, prompting the movement of some modern humans out of Africa and across the world.
Other models focus on how modern human behavior may have arisen through gradual steps, with the archaeological signatures of such behavior appearing only through demographic or subsistence-based changes. Many cite evidence of behavioral modernity earlier (by at least about 150,000–75,000 years ago and possibly earlier) namely in the African Middle Stone Age. Anthropologists Sally McBrearty and Alison S. Brooks have been notable proponents of gradualism—challenging Europe-centered models by situating more change in the African Middle Stone Age—though this model is more difficult to substantiate due to the general thinning of the fossil record as one goes further back in time.
Definition
To classify what should be included in modern human behavior, it is necessary to define behaviors that are universal among living human groups. Some examples of these human universals are abstract thought, planning, trade, cooperative labor, body decoration, and the control and use of fire. Along with these traits, humans possess much reliance on social learning. This cumulative cultural change or cultural "ratchet" separates human culture from social learning in animals. In addition, a reliance on social learning may be responsible in part for humans' rapid adaptation to many environments outside of Africa. Since cultural universals are found in all cultures, including isolated indigenous groups, these traits must have evolved or have been invented in Africa prior to the exodus.
Archaeologically, a number of empirical traits have been used as indicators of modern human behavior. While these are often debated a few are generally agreed upon. Archaeological evidence of behavioral modernity includes:
Burial
Fishing
Figurative art (cave paintings, petroglyphs, dendroglyphs, figurines)
Use of pigments (such as ochre) and jewelry for decoration or self-ornamentation
Using bone material for tools
Transport of resources over long distances
Blade technology
Diversity, standardization, and regionally distinct artifacts
Hearths
Composite tools
Critiques
Several critiques have been placed against the traditional concept of behavioral modernity, both methodologically and philosophically. Anthropologist John Shea outlines a variety of problems with this concept, arguing instead for "behavioral variability", which, according to the author, better describes the archaeological record. The use of trait lists, according to Shea, runs the risk of taphonomic bias, where some sites may yield more artifacts than others despite similar populations; as well, trait lists can be ambiguous in how behaviors may be empirically recognized in the archaeological record. In particular, Shea cautions that population pressure, cultural change, or optimality models, like those in human behavioral ecology, might better predict changes in tool types or subsistence strategies than a change from "archaic" to "modern" behavior. Some researchers argue that a greater emphasis should be placed on identifying only those artifacts which are unquestionably, or purely, symbolic as a metric for modern human behavior.
Since 2018, recent dating methods utilized on various cave art sites in Spain and France have shown that Neanderthals performed symbolic artistic expression, consisting of red "lines, dots, and hand stencils" found in caves, prior to contact with anatomically modern humans. This is contrary to previous suggestions that Neanderthals lacked these capabilities.
Theories and models
Late Upper Paleolithic Model or "Upper Paleolithic Revolution"
The Late Upper Paleolithic Model, or Upper Paleolithic Revolution, refers to the idea that, though anatomically modern humans first appear around 150,000 years ago (as was once believed), they were not cognitively or behaviorally "modern" until around 50,000 years ago, leading to their expansion out of Africa and into Europe and Asia. These authors note that traits used as a metric for behavioral modernity do not appear as a package until around 40–50,000 years ago. Anthropologist Richard Klein specifically describes that evidence of fishing, tools made from bone, hearths, significant artifact diversity, and elaborate graves are all absent before this point. According to both Shea and Klein, art only becomes common beyond this switching point, signifying a change from archaic to modern humans. Most researchers argue that a neurological or genetic change, perhaps one enabling complex language, such as FOXP2, caused this revolutionary change in humans. The role of FOXP2 as a driver of evolutionary selection has been called into question following recent research results.
Building on the FOXP2 gene hypothesis, cognitive scientist Philip Lieberman has argued that proto-language behaviour existed prior to 50,000 BP, albeit in a more primitive form. Lieberman has advanced fossil evidence, such as neck and throat dimensions, to demonstrate that so-called “anatomically modern” humans from 100,000 BP continued to evolve their SVT (supralaryngeal vocal tract), which already possessed a horizontal portion (SVTh) capable of producing many phonemes which were mostly consonants. According to his theory, Neanderthals and early Homo sapiens would have been able to communicate using sounds and gestures.
From 100,000 BP, Homo sapiens necks continued to lengthen until, by around 50,000 BP, they were long enough to accommodate a vertical portion of the SVT (SVTv), which is now a universal trait among humans. This SVTv enabled the enunciation of the quantal vowels [i], [u], and [a]. These quantal vowels could then be immediately put to use by the already sophisticated neuro-motor-control features of the FOXP2 gene to generate more nuanced sounds, in effect increasing by orders of magnitude the number of distinct sounds that can be produced and allowing for fully symbolic language.
Goody (1986) draws an analogy between the development of spoken language and that of writing: the shift from pictographic or ideographic symbols into a fully abstract logographic writing system (such as hieroglyphics), or from a logographic system into an abjad or alphabet, led to dramatic changes in human civilization.
Alternative models
Contrasted with this view of a spontaneous leap in cognition among ancient humans, some anthropologists like Alison S. Brooks, primarily working in African archaeology, point to the gradual accumulation of "modern" behaviors, starting well before the 50,000-year benchmark of the Upper Paleolithic Revolution models. Howiesons Poort, Blombos, and other South African archaeological sites, for example, show evidence of marine resource acquisition, trade, the making of bone tools, blade and microlithic technology, and abstract ornamentation at least by 80,000 years ago. Given evidence from Africa and the Middle East, a variety of hypotheses have been put forth to describe an earlier, gradual transition from simple to more complex human behavior. Some authors have pushed back the appearance of fully modern behavior to around 80,000 years ago or earlier in order to incorporate the South African data.
Others focus on the slow accumulation of different technologies and behaviors across time. These researchers describe how anatomically modern humans could have been cognitively the same, and that what we define as behavioral modernity is just the result of thousands of years of cultural adaptation and learning. Archaeologist Francesco d'Errico and others have looked at Neanderthal culture, rather than early human behavior exclusively, for clues into behavioral modernity. Noting that Neanderthal assemblages often portray traits similar to those listed for modern human behavior, researchers stress that the foundations for behavioral modernity may, in fact, lie deeper in our hominin ancestors. If both modern humans and Neanderthals express abstract art and complex tools, then "modern human behavior" cannot be a derived trait for our species. They argue that the original "human revolution" theory reflects a profound Eurocentric bias. Recent archaeological evidence, they argue, proves that humans evolving in Africa some 300,000 or even 400,000 years ago were already becoming cognitively and behaviourally "modern". These features include blade and microlithic technology, bone tools, increased geographic range, specialized hunting, the use of aquatic resources, long-distance trade, systematic processing and use of pigment, and art and decoration. These items do not occur suddenly together as predicted by the "human revolution" model, but at sites that are widely separated in space and time. This suggests a gradual assembling of the package of modern human behaviours in Africa, and its later export to other regions of the Old World.
Between these extremes is the view—currently supported by archaeologists Chris Henshilwood, Curtis Marean, Ian Watts and others—that there was indeed some kind of "human revolution" but that it occurred in Africa and spanned tens of thousands of years. The term "revolution," in this context, would mean not a sudden mutation but a historical development along the lines of the industrial revolution or the Neolithic revolution. In other words, it was a relatively accelerated process, too rapid for ordinary Darwinian "descent with modification" yet too gradual to be attributed to a single genetic or other sudden event. These archaeologists point in particular to the relatively explosive emergence of ochre crayons and shell necklaces, apparently used for cosmetic purposes. These archaeologists see symbolic organisation of human social life as the key transition in modern human evolution. Recently discovered at sites such as Blombos Cave and Pinnacle Point, South Africa, pierced shells, pigments and other striking signs of personal ornamentation have been dated within a time-window of 70,000–160,000 years ago in the African Middle Stone Age, suggesting that the emergence of Homo sapiens coincided, after all, with the transition to modern cognition and behaviour. While viewing the emergence of language as a "revolutionary" development, this school of thought generally attributes it to cumulative social, cognitive and cultural evolutionary processes as opposed to a single genetic mutation.
A further view, taken by archaeologists such as Francesco d'Errico and João Zilhão, is a multi-species perspective arguing that evidence for symbolic culture, in the form of utilised pigments and pierced shells, are also found in Neanderthal sites, independently of any "modern" human influence.
Cultural evolutionary models may also shed light on why although evidence of behavioral modernity exists before 50,000 years ago, it is not expressed consistently until that point. With small population sizes, human groups would have been affected by demographic and cultural evolutionary forces that may not have allowed for complex cultural traits. According to some authors, until population density became significantly high, complex traits could not have been maintained effectively. Some genetic evidence supports a dramatic increase in population size before human migration out of Africa. High local extinction rates within a population also can significantly decrease the amount of diversity in neutral cultural traits, regardless of cognitive ability.
Archaeological evidence
Africa
Research from 2017 indicates that Homo sapiens originated in Africa between around 350,000 and 260,000 years ago. There is some evidence for the beginning of modern behavior among early African H. sapiens around that period.
Before the Out of Africa theory was generally accepted, there was no consensus on where the human species evolved and, consequently, where modern human behavior arose. Now, however, African archaeology has become extremely important in discovering the origins of humanity. The first Cro-Magnon expansion into Europe around 48,000 years ago is generally accepted as already "modern", and it is now generally believed that behavioral modernity appeared in Africa before 50,000 years ago, either significantly earlier or possibly as a late Upper Paleolithic "revolution" shortly beforehand, which prompted migration out of Africa.
A variety of evidence of abstract imagery, widened subsistence strategies, and other "modern" behaviors has been discovered in Africa, especially South, North, and East Africa. The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site was dated to around 77,000 and 100,000–75,000 years old. Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa. Beads and other personal ornamentation have been found in Morocco that might be as much as 130,000 years old; as well, the Cave of Hearths in South Africa has yielded a number of beads dating from significantly before 50,000 years ago, and shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa.
Specialized projectile weapons have also been found at various sites in Middle Stone Age Africa, including bone and stone arrowheads at South African sites such as Sibudu Cave (along with an early bone needle, also found at Sibudu), dating to approximately 72,000–60,000 years ago, on some of which poisons may have been used, and bone harpoons at the Central African site of Katanda, dating to about 90,000 years ago. Evidence also exists for the systematic heat treating of silcrete stone to increase its flakeability for the purpose of toolmaking, beginning approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago.
In 2008, an ochre processing workshop, likely for the production of paints, was uncovered at Blombos Cave, South Africa, dating to c. 100,000 years ago. Analysis shows that a liquefied pigment-rich mixture was produced and stored in two abalone shells, and that ochre, bone, charcoal, grindstones, and hammer-stones also formed a composite part of the toolkits. Evidence for the complexity of the task includes procuring and combining raw materials from various sources (implying the makers had a mental template of the process they would follow), possibly using pyrotechnology to facilitate fat extraction from bone, using a probable recipe to produce the compound, and the use of shell containers for mixing and storage for later use. Modern behaviors, such as the making of shell beads, bone tools and arrows, and the use of ochre pigment, are evident at a Kenyan site by 78,000–67,000 years ago. Evidence of early stone-tipped projectile weapons (a characteristic tool of Homo sapiens), the stone tips of javelins or throwing spears, was discovered in 2013 at the Ethiopian site of Gademotta, and dates to around 279,000 years ago.
Expanding subsistence strategies beyond big-game hunting, and the consequential diversity in tool types, has been noted as a sign of behavioral modernity. A number of South African sites have shown an early reliance on aquatic resources, from fish to shellfish. Pinnacle Point, in particular, shows exploitation of marine resources as early as 120,000 years ago, perhaps in response to more arid conditions inland. Establishing a reliance on predictable shellfish deposits, for example, could reduce mobility and facilitate complex social systems and symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence of fishing as well. Taphonomic changes in fish skeletons from Blombos Cave have been interpreted as evidence for the capture of live fish, clearly an intentional human behavior.
Humans in North Africa (Nazlet Sabaha, Egypt) are known to have engaged in chert mining, as early as about 100,000 years ago, for the construction of stone tools.
Evidence was found in 2018, dating to about 320,000 years ago, at the Kenyan site of Olorgesailie, of the early emergence of modern behaviors, including long-distance trade networks (involving goods such as obsidian), the use of pigments, and the possible making of projectile points. The authors of three 2018 studies on the site observe that the evidence of these behaviors is approximately contemporary with the earliest known Homo sapiens fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors had already begun in Africa around the time of the emergence of anatomically modern Homo sapiens.
In 2019, further evidence of early complex projectile weapons in Africa was found at Aduma, Ethiopia, dated 100,000–80,000 years ago, in the form of points considered likely to belong to darts delivered by spear throwers.
Olduvai Hominid 1 wore facial piercings.
Europe
While traditionally described as evidence for the later Upper Paleolithic Model, European archaeology has shown that the issue is more complex. A variety of stone tool technologies are present at the time of human expansion into Europe and show evidence of modern behavior. Despite the problems of conflating specific tools with cultural groups, the Aurignacian tool complex, for example, is generally taken as a purely modern human signature. The discovery of "transitional" complexes, like the "proto-Aurignacian", has been taken as evidence of human groups progressing through "steps of innovation". If, as this might suggest, human groups were already migrating into eastern Europe around 40,000 years ago and only afterward showed evidence of behavioral modernity, then either the cognitive change must have diffused back into Africa or it was already present before migration.
In light of a growing body of evidence of Neanderthal culture and tool complexes, some researchers have put forth a "multiple species model" for behavioral modernity. Neanderthals were often cited as being an evolutionary dead-end, apish cousins who were less advanced than their human contemporaries. Their personal ornaments were dismissed as trinkets or poor imitations compared to the cave art produced by H. sapiens. Despite this, European evidence has shown a variety of personal ornaments and artistic artifacts produced by Neanderthals; for example, the Neanderthal site of Grotte du Renne has produced grooved bear, wolf, and fox incisors, ochre and other symbolic artifacts. Although few and controversial, circumstantial evidence of Neanderthal ritual burials has been uncovered. There are two options to describe this symbolic behavior among Neanderthals: they copied cultural traits from arriving modern humans, or they had their own cultural traditions comparable with behavioral modernity. If they just copied cultural traditions, which is debated by several authors, they still possessed the capacity for complex culture described by behavioral modernity. As discussed above, if Neanderthals also were "behaviorally modern" then it cannot be a species-specific derived trait.
Asia
Most debates surrounding behavioral modernity have been focused on Africa or Europe, but an increasing amount of focus has been placed on East Asia. This region offers a unique opportunity to test hypotheses of multi-regionalism, replacement, and demographic effects. Unlike in Europe, where initial migration occurred around 50,000 years ago, human remains in China have been dated to around 100,000 years ago. This early evidence of human expansion calls into question behavioral modernity as an impetus for migration.
Stone tool technology is particularly of interest in East Asia. Following Homo erectus migrations out of Africa, Acheulean technology never seems to appear beyond present-day India and into China. Analogously, Mode 3, or Levallois, technology is not apparent in China following later hominin dispersals. This lack of more advanced technology has been explained by serial founder effects and low population densities out of Africa. Although tool complexes comparable to those of Europe are missing or fragmentary, other archaeological evidence shows behavioral modernity. For example, the peopling of the Japanese archipelago offers an opportunity to investigate the early use of watercraft. Although one site, Kanedori in Honshu, does suggest the use of watercraft as early as 84,000 years ago, there is no other evidence of hominins in Japan until 50,000 years ago.
The Zhoukoudian cave system near Beijing has been excavated since the 1930s and has yielded precious data on early human behavior in East Asia. Although disputed, there is evidence of possible human burials and interred remains in the cave dated to around 34,000–20,000 years ago. These remains have associated personal ornaments in the form of beads and worked shell, suggesting symbolic behavior. Along with possible burials, numerous other symbolic objects like punctured animal teeth and beads, some dyed in red ochre, have all been found at Zhoukoudian. Although fragmentary, the archaeological record of eastern Asia shows evidence of behavioral modernity before 50,000 years ago but, like the African record, it is not fully apparent until that time.
See also
Anatomically modern human
Archaic Homo sapiens
Blombos Cave
Cultural universal
Dawn of Humanity (film)
Evolution of human intelligence
Female cosmetic coalitions
FOXP2 and human evolution
Human evolution
List of Stone Age art
Origin of language
Origins of society
Prehistoric art
Prehistoric music
Paleolithic religion
Recent African origin
Sibudu Cave
Sociocultural evolution
Symbolism (disambiguation)
Symbolic culture
Timeline of evolution
References
External links
Steven Mithen (1999), The Prehistory of the Mind: The Cognitive Origins of Art, Religion and Science, Thames & Hudson.
Artifacts in Africa Suggest An Earlier Modern Human
Tools point to African origin for human behaviour
Key Human Traits Tied to Shellfish Remains, nytimes 2007/10/18
"Python Cave" Reveals Oldest Human Ritual, Scientists Suggest
Human Timeline (Interactive) – Smithsonian, National Museum of Natural History (August 2016).
Anthropology
Anatomically modern humans
Modernity
Upper Paleolithic
Human evolution
Evolutionary biology
Evolutionary psychology
Narrative paradigm
Narrative paradigm is a communication theory conceptualized by 20th-century communication scholar Walter Fisher. The paradigm claims that all meaningful communication occurs via storytelling or the reporting of events. Humans participate as storytellers and observers of narratives. This theory further claims that stories are more persuasive than arguments. Essentially, the narrative paradigm helps to explain how humans are able to understand complex information through narrative.
Background
The Narrative Paradigm is a theory that suggests that human beings are natural storytellers and that a good story is more convincing than a good argument. Walter Fisher developed this theory as a solution to a problem he saw in the public sphere: human beings were often unable to make cohesive traditional arguments. At the time, the rational world paradigm was the theory used to resolve public controversies. He believed that stories have the power to include a beginning, middle, and end of an argument, and that the rational world paradigm fails to be effective in sensemaking.
Fisher uses the term paradigm rather than theory because a paradigm is broader than a theory. Fisher stated, "There is no genre, including technical communication, that is not an episode in the story of life." For this reason, Fisher considered narration the ultimate metaphor for encompassing the human experience.
Fisher believed that humans are not rational and proposed that the narrative is the basis of communication. Fisher notes that reasoning is achieved through "all sorts of symbolic action." According to this viewpoint, people communicate by telling/observing a compelling story rather than by producing evidence or constructing a logical argument. The narrative paradigm is purportedly all-encompassing, allowing all communication to be looked at as a narrative even though it may not conform to the traditional literary requirements of a narrative. He states:
Humans see the world as a set of stories. Each accepts stories that match his or her values and beliefs, understood as common sense.
Although people claim that their decisions are rational, incorporating history, culture, and perceptions about the other people involved, all of these are subjective and incompletely understood.
Narrative rationality requires stories to be probable, coherent and to exhibit fidelity.
Storytelling is one of the first language skills that children develop. It is universal across cultures and time.
Rational world paradigm
Walter Fisher conceptualized the Narrative Paradigm in direct contrast to the Rational World Paradigm. "Fisher's interest in narrative developed out of his conclusion that the dominant model for explaining human communication—the rational-world paradigm—was inadequate." The rational world paradigm suggests that an argument is most persuasive when it is logical. This theory is based on the teachings of Plato and Aristotle.
According to Aristotle, some statements are superior to others by virtue of their relationship to true knowledge. This view claims that:
People are essentially thinking beings, basing their knowledge on evidence-based reasoning.
Rational argument reflects knowledge and understanding, and how the case is made. These qualities determine whether the argument is accepted, so long as the form matches the forum, which might be scientific, legal, philosophical, etc.
The world is a set of logical puzzles that can be solved through reason.
Narrative rationality
Narrative rationality requires coherence and fidelity, which contribute to judgments about reasons.
Coherence
Narrative coherence is the degree to which a story makes sense. Coherent stories are internally consistent, with sufficient detail, strong characters, and free of significant surprises. The ability to assess coherence is learned and improves with experience. Individuals assess a story's coherence by comparing it with similar stories. The ultimate test of narrative sense is whether the characters act reliably. If characters show continuity throughout their thoughts, motives, and actions, acceptance increases. However, characters behaving uncharacteristically destroy acceptance.
Fidelity
Narrative fidelity is the degree to which a story fits into the observer's experience with other accounts: how far the story rings true with past stories the observer knows to be true in their own life. Stories with fidelity may influence the observer's beliefs and values.
Fisher set out five criteria that affect a story's narrative fidelity. The first is the values embedded in the story. The second is the connection between the story and the espoused values. The third is the possible outcomes that would accrue to people adhering to the espoused values. The fourth is the consistency of the narrative's values with the observer's own values, and the fifth is the extent to which the story's values represent the highest values possible in human experience.
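To make the checklist above concrete, consider the following illustrative sketch in Python. It is our own formalization rather than anything Fisher proposed: the field names, the 0-to-1 scoring scale, and the equal-weight averaging are all assumptions introduced for the example.

from dataclasses import dataclass, fields

@dataclass
class FidelityAssessment:
    # One score per Fisher criterion, from 0.0 (fails) to 1.0 (fully met).
    # Field names are illustrative, not Fisher's terminology.
    values_embedded: float       # the values embedded in the story
    value_connection: float      # connection between the story and its espoused values
    outcome_plausibility: float  # outcomes of adhering to the espoused values
    value_consistency: float     # consistency with the observer's own values
    value_ideality: float        # fit with the highest values in human experience

    def fidelity_score(self) -> float:
        """Average the five criteria (equal weighting is an assumption)."""
        scores = [getattr(self, f.name) for f in fields(self)]
        return sum(scores) / len(scores)

# Example: a story that matches the observer's values but promises implausible outcomes.
story = FidelityAssessment(0.9, 0.8, 0.3, 0.9, 0.6)
print(round(story.fidelity_score(), 2))  # 0.7

A story scoring high on all five criteria would, on Fisher's account, exhibit fidelity; the numeric rubric is only a reading aid, since Fisher's test is interpretive rather than quantitative.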
Evaluation of reasoning systems
Fisher's narrative paradigm offers an alternative to Aristotelian analysis, which dominates the field of rhetorical thinking. Narratives do not require training or expertise to evaluate. Common sense assesses narrative coherence and fidelity. Busselle and Bilandzic distinguish narrative rationality from realism, writing "It is remarkable that the power of narrative is not diminished by readers’ or viewers’ knowledge that the story is invented. On the contrary, successful stories—those that engage us most—often are both fictional and unrealistic."
Alternatively, Foucault claimed that communications systems formed through the savoir and pouvoir (knowledge and power) of the hierarchies that control access to the discourses. Hence, criteria for assessing the reliability and completeness of evidence, and whether the pattern of reasoning is sound are not absolutes but defined over time by those in positions of authority. This is particularly significant when the process of thinking includes values and policy in addition to empirical data.
The narrative paradigm instead asserts that any individual can judge a story's merits as a basis for belief and action.
Narration affects every aspect of each individual's life in verbal and nonverbal bids for someone to believe or act in a certain way. Even when a message appears abstract—i.e., the language is literal and not figurative—it is narration. This is because it is embedded in the storyteller's ongoing story and it invites observers to assess its value for their own lives.
Narrative rationality and narrative emotion are complementary within a narrative theory. The former considers how effectively the story conveys its meaning, as well as its implications. The latter considers the emotional reactions of the story's observers. Narrative emotion is an emotion felt for the sake of someone, or something, else.
Applications
Narrative theory is an assessment framework within various fields of communication. Those who use narrative theory within their research refer to it as a general way of viewing communication. The narrative paradigm is generally considered an interpretative theory of communication. It is an especially useful theory for teaching qualitative research methods.
Fisher’s theory has been considered for domains ranging from organizational communication to family interaction, to racism, and to advertising. McNamara proposed that the narrative paradigm can be used with military storytelling to enhance the perception of the United States armed services. Stutts and Barker, of Virginia Commonwealth University, proposed that the Narrative Paradigm can be used to evaluate if a company's brand will be well received by consumers, by determining if the created narrative has coherence and fidelity. Other researchers proposed using the narrative paradigm to evaluate ethical standards in advertising. Roberts used the narrative paradigm as a way to better understand the use of narrative in folklore. Hobart proposed using narrative theory as a way to interpret urban legends and other kinds of hoaxes.
A study tested the effects of narrative suggestions on paranormal belief. Recall that Fisher's paradigm posits that a good story is more convincing than a good argument. This was put to the test by examining the combined effects of source credibility, narrative, and message modality, based on Fisher's idea of narrative rationality. Participants were presented with a fabricated news story about strange noises being heard in a nearby science lab. One set of the narratives explained the noise by natural causes; another set explained it in paranormal terms. Additionally, the main character of the story was presented as either a child witness, a university student, or a scientist, on the hypothesis that differences in the character's credibility could be a factor. This variation was used to determine whether source credibility would affect the narrative suggestion. The study found that belief in the paranormal narrative was positively correlated with strong narrative coherence and narrative fidelity (narrative rationality), regardless of message modality. It was also determined that source credibility had a statistically significant impact on the outcome of belief in the paranormal narrative.
Narrative paradigm is also applicable when assessing multinational working relationships. Global interactions between groups with different backgrounds have the tendency to hinder group progress and relationship building. Over the past two decades, scholars have conceptualized diversity in the workplace using several different theories. As companies continue to diversify, businesses look for communication models to help manage the complex structure of human relationships. Narrative paradigm serves as a communication technique that shows companies how the application of a good story could provide benefits in the workforce. Storytelling is a cross-cultural solution that establishes credibility and trust in relationships between workers.
Narrative and politics
One study that used narrative theory more directly was conducted by Smith in 1984. Smith looked at the fidelity and coherence of narratives presented as the Republican and Democratic Party platforms in the United States and found that, despite apparent differences, each party was able to maintain integrity and fidelity by remaining consistent in both structure and overarching party values.
Narrative and health communication
A study claimed that narrative features could be strategically altered by health communicators to affect the reader's identification. It found that similarities between the reader and the narrative's protagonist, but not the narrator's point of view, have a direct impact on the narrative's persuasiveness.
Narrative and branding
Narrative processing can create or enhance connections between a brand and an audience. Companies and businesses use stories, or brands that suggest a story, to produce brand loyalty. Businesses invest heavily in creating a good story through advertising and public relations. In brand development, many marketers focus on defining a brand persona (typical user) before constructing a narrative for that brand. Character traits such as honesty, curiosity, flexibility, and determination become embedded in the persona. Commitment to the associated behavioral implications can help the brand maintain consistency with its narrative.
Narrative and law
A growing number of legal scholars contend that narrative persuades in law. In one study, judges tended to prefer legal briefs taking a storytelling approach to those that do not. In response, legal scholars have applied narrative techniques to legal persuasion and even legal communication. Scholars in this area commonly refer to this application as "Applied Legal Storytelling." Legal storytelling in courtrooms requires a formalization of the narratives presented. This is achieved through the use of first- and third-party perspectives of a narrative to mitigate impartial reporting of the story.
Criticism
Critics of the narrative paradigm mainly contend that it is not as universally applicable as Fisher suggests. For example, Rowland asserted that it should be applied strictly to communication that fits classic narrative patterns to avoid undermining its credibility.
Other critiques include issues of conservative bias. Kirkwood stated that Fisher's logic of good reasons focuses only on prevailing issues but does not see all the ways that stories can promote social change. In some ways, both Kirkwood and Fisher agree that this observation is more of an extension to the theory than a critique.
Stroud considered "multivalent" narratives that include seemingly contradictory values or positions that force a reader to reconstruct their meaning, thereby enabling positive judgments of narrative fidelity and the adoption of new values.
Some forms of communication are not narrative in the way that Fisher maintains. Many science fiction and fantasy novels/movies challenge rather than conform to common values.
The narrative approach does not provide a more democratic structure compared to the one imposed by the rational world paradigm. Nor does it offer a complete alternative to that paradigm.
The narrative paradigm gained attention from poststructuralist education theorists for appealing to notions of truth.
Related theories
Rhetoric theory
The narrative paradigm incorporates both the pathos and logos forms of rhetoric theory. Rhetoric theory was formulated by Aristotle, who defines rhetoric as the available means of persuasion. It includes two assumptions: first, that effective public speakers must consider their audience, and second, that effective public speakers supply proofs.
Aristotle divided public speaking into three parts: the speaker, the subject and the audience. He considered the audience the most important, determining the speech’s end and object. Therefore, audience analysis, which is the process of evaluating an audience and its background is essential.
In the second assumption, Aristotle's proofs refer to the means of persuasion, and the three types of proof are Ethos, Pathos, and Logos.
Ethos: The perceived character, intelligence and goodwill of a speaker as they become revealed through his or her speech.
Logos: The logical proof that speakers employ.
Pathos: The emotions that are drawn out of listeners.
There are three modes of ethos:
Phronesis: practical wisdom
Arete: moral character
Eunoia: goodwill
Situation models
When people experience a story, the comprehension phase is where they form a mental representation of the text. Such a mental image is called a situation model. Situation models are representations of the state of affairs described in a document rather than of the text itself. Much of the research suggests that observers behave as though they are in the story rather than outside of it. This supports Fisher’s model that narrative components backed by good reasons are related to elements in situation models.
Space
Situation models represent relevant aspects of the narrative's environment. Objects that are spatially close to observers are generally more relevant than more distant objects. The same holds for situation models. Observers are similarly slower to recognize words denoting objects distant from a protagonist than those close to the protagonist. When observers have extensive knowledge of the spatial layout of the story setting (e.g., a building), they update their representations according to the location and goals of the protagonist. They have the fastest mental access to the room that the protagonist is close to. For example, they can more readily say whether two objects are in the same room if the room mentioned is close to the protagonist. The interpretation of the meaning of a verb denoting the movement of people or objects in space, such as to approach, depends on their situation models. The interpretation of observers also depends on the size of the landmark and the speed of the figure. Observers behave as if they are actually present in the situation.
Goals and causation
In one study, observers recognized goals yet to be accomplished by the protagonist more quickly than goals that had just been accomplished. Keefe and McDaniel presented subjects with sentences such as "after standing through a 3-hour debate, the tired speaker walked over to his chair (and sat down)" and then with probe words (e.g., "sat"). Subjects took about the same amount of time to name "sat" when the clause about the speaker sitting down was omitted as when it was included. Moreover, naming times were significantly faster in both of these conditions than in a control condition, in which it was implied that the speaker remained standing.
References
Sources
Anderson, Rob & Ross, Veronica. (2001). Questions of Communication: A Practical Introduction to Theory (3rd ed.). New York: Bedford/St. Martin's Press.
Cragan, John F., & Shields, Donald C. (1997). Understanding Communication Theory: The Communicative Forces for Human Action. Boston, MA: Allyn & Bacon.
Fisher, Walter R. (1995). "Narration, Knowledge, and the Possibility of Wisdom" in Rethinking Knowledge: Reflections Across the Disciplines (Suny Series in the Philosophy of the Social Sciences). (Fisher & Robert F. Goodman as editors). New York: State University of New York Press.
Kahneman, Daniel, Paul Slovic, and Amos Tversky, eds. Judgment under uncertainty: Heuristics and biases. New York: Cambridge University Press, 1982.
Narratology
Metonymy
Metonymy is a figure of speech in which a concept is referred to by the name of something closely associated with that thing or concept.
Etymology
The words metonymy and metonym come from Ancient Greek μετωνυμία (metōnymía) 'a change of name', from μετά (metá) 'after, beyond' and -ωνυμία (-ōnymía), a suffix that names figures of speech, from ὄνυμα (ónyma) 'name'.
Background
Metonymy and related figures of speech are common in everyday speech and writing. Synecdoche and metalepsis are considered specific types of metonymy. Polysemy, the capacity for a word or phrase to have multiple meanings, sometimes results from relations of metonymy. Both metonymy and metaphor involve the substitution of one term for another. In metaphor, this substitution is based on some specific analogy between two things, whereas in metonymy the substitution is based on some understood association or contiguity.
American literary theorist Kenneth Burke considers metonymy as one of four "master tropes": metaphor, metonymy, synecdoche, and irony. He discusses them in particular ways in his book A Grammar of Motives. Whereas Roman Jakobson argued that the fundamental dichotomy in trope was between metaphor and metonymy, Burke argues that the fundamental dichotomy is between irony and synecdoche, which he also describes as the dichotomy between dialectic and representation, or again between reduction and perspective.
In addition to its use in everyday speech, metonymy is a figure of speech in some poetry and in much rhetoric. Greek and Latin scholars of rhetoric made significant contributions to the study of metonymy.
Meaning relationships
Metonymy takes many different forms.
Synecdoche uses a part to refer to the whole, or the whole to refer to the part.
Metalepsis uses a familiar word or a phrase in a new context. For example, "lead foot" may describe a fast driver; lead is proverbially heavy, and a foot exerting more pressure on the accelerator causes a vehicle to go faster (in this context unduly so). The figure of speech is a "metonymy of a metonymy".
Many cases of polysemy originate as metonyms: for example, "chicken" means the meat as well as the animal; "crown" for the object, as well as the institution.
Versus metaphor
Metonymy works by the contiguity (association) between two concepts, whereas the term "metaphor" is based upon their analogous similarity. When people use metonymy, they do not typically wish to transfer qualities from one referent to another as they do with metaphor. There is nothing press-like about reporters or crown-like about a monarch, but "the press" and "the crown" are both common metonyms.
Some uses of figurative language may be understood as both metonymy and metaphor; for example, the relationship between "a crown" and a "king" could be interpreted metaphorically (i.e., the king, like his gold crown, could be seemingly stiff yet ultimately malleable, over-ornate, and consistently immobile). In the phrase "lands belonging to the crown", the word "crown" is a metonymy. The reason is that monarchs by and large indeed wear a crown, physically. In other words, there is a pre-existent link between "crown" and "monarchy". On the other hand, when Ghil'ad Zuckermann argues that the Israeli language is a "phoenicuckoo cross with some magpie characteristics", he is using metaphors. There is no physical link between a language and a bird. The reason the metaphors "phoenix" and "cuckoo" are used is that on the one hand hybridic "Israeli" is based on Hebrew, which, like a phoenix, rises from the ashes; and on the other hand, hybridic "Israeli" is based on Yiddish, which like a cuckoo, lays its egg in the nest of another bird, tricking it to believe that it is its own egg. Furthermore, the metaphor "magpie" is employed because, according to Zuckermann, hybridic "Israeli" displays the characteristics of a magpie, "stealing" from languages such as Arabic and English.
Two examples using the term "fishing" help clarify the distinction. The phrase "to fish pearls" uses metonymy, drawing from "fishing" the idea of taking things from the ocean. What is carried across from "fishing fish" to "fishing pearls" is the domain of metonymy. In contrast, the metaphorical phrase "fishing for information" transfers the concept of fishing into a new domain. If someone is "fishing" for information, we do not imagine that the person is anywhere near the ocean; rather, we transpose elements of the action of fishing (waiting, hoping to catch something that cannot be seen, probing, and most importantly, trying) into a new domain (a conversation). Thus, metaphors work by presenting a target set of meanings and using them to suggest a similarity between items, actions, or events in two domains, whereas metonymy calls up or references a specific domain (here, removing items from the sea).
Sometimes, metaphor and metonymy may both be at work in the same figure of speech, or one could interpret a phrase metaphorically or metonymically. For example, the phrase "lend me your ear" could be analyzed in a number of ways. One could imagine the following interpretations:
Analyze "ear" metonymically first – "ear" means "attention" (because people use ears to pay attention to each other's speech). Now, when we hear the phrase "Talk to him; you have his ear", it signifies that he will listen to you or pay attention to you. In another phrase, "lending an ear (attention)", we stretch the base meaning of "lend" (to let someone borrow an object) to include the "lending" of non-material things (attention), but, beyond this slight extension of the verb, no metaphor is at work.
Imagine the whole phrase literally – imagine that the speaker literally borrows the listener's ear as a physical object (and the person's head with it). Then the speaker has temporary possession of the listener's ear, so the listener has granted the speaker temporary control over what the listener hears. The phrase "lend me your ear" is interpreted to metaphorically mean that the speaker wants the listener to grant the speaker temporary control over what the listener hears.
First, analyze the verb phrase "lend me your ear" metaphorically to mean "turn your ear in my direction", since it is known that literally lending a body part is nonsensical. Then, analyze the motion of ears metonymically – we associate "turning ears" with "paying attention", which is what the speaker wants the listeners to do.
It is difficult to say which analysis above most closely represents the way a listener interprets the expression, and it is possible that different listeners analyse the phrase in different ways, or even in different ways at different times. Regardless, all three analyses yield the same interpretation. Thus, metaphor and metonymy, though different in their mechanism, work together seamlessly.
Examples
Here are some broad kinds of relationships where metonymy is frequently used:
Containment: When one thing contains another, it can frequently be used metonymically, as when "dish" is used to refer not to a plate but to the food it contains, or as when the name of a building is used to refer to the entity it contains, as when "the White House" or "the Pentagon" are used to refer to the Administration of the United States, or the U.S. Department of Defense, respectively.
A physical item, place, or body part used to refer to a related concept, such as "the bench" for the judicial profession, "stomach" or "belly" for appetite or hunger, "mouth" for speech, being "in diapers" for infancy, "palate" for taste, "the altar" or "the aisle" for marriage, "hand" for someone's responsibility for something ("he had a hand in it"), "head" or "brain" for mind or intelligence, or "nose" for concern about someone else's affairs, (as in "keep your nose out of my business"). A reference to Timbuktu, as in "from here to Timbuktu," usually means a place or idea is too far away or mysterious. Metonymy of objects or body parts for concepts is common in dreams.
Tools/instruments: Often a tool is used to signify the job it does or the person who does the job, as in the phrase "his Rolodex is long and valuable" (referring to the Rolodex instrument, which keeps contact business cards, meaning he has a lot of contacts and knows many people). Also "the press" (referring to the printing press), or as in the proverb, "The pen is mightier than the sword."
Product for process: This is a type of metonymy where the product of the activity stands for the activity itself. For example, in "The book is moving right along," the book refers to the process of writing or publishing.
Punctuation marks often stand metonymically for a meaning expressed by the punctuation mark. For example, "He's a big question mark to me" indicates that something is unknown. In the same way, 'period' can be used to emphasise that a point is concluded or not to be challenged.
Synecdoche: A part of something is often used for the whole, as when people refer to "head" of cattle or assistants are referred to as "hands." An example of this is the Canadian dollar, referred to as the loonie for the image of a bird on the one-dollar coin. United States one hundred-dollar bills are often referred to as "Bens", "Benjamins" or "Franklins" because they bear a portrait of Benjamin Franklin. Also, the whole of something is used for a part, as when people refer to a municipal employee as "the city" or police officers as "the law".
Toponyms: A country's capital city or some location within the city is frequently used as a metonym for the country's government, such as Washington, D.C., in the United States; Ottawa in Canada; Rome in Italy; Paris in France; Tokyo in Japan; New Delhi in India; London in the United Kingdom; Moscow in Russia etc. Perhaps the oldest such example is "Pharaoh" which originally referred to the residence of the King of Egypt but by the New Kingdom had come to refer to the king himself. Similarly, other important places, such as Wall Street, K Street, Madison Avenue, Silicon Valley, Hollywood, Vegas, and Detroit are commonly used to refer to the industries that are located there (finance, lobbying, advertising, high technology, entertainment, gambling, and motor vehicles, respectively). Such usage may persist even when the industries in question have moved elsewhere, for example, Fleet Street continues to be used as a metonymy for the British national press, though many national publications are no longer headquartered on the street of that name.
Places and institutions
A place is often used as a metonym for a government or other official institutions, for example, Brussels for the institutions of the European Union, The Hague for the International Court of Justice or International Criminal Court, Nairobi for the government of Kenya, the Kremlin for the Russian presidency, Chausseestraße and Pullach for the German Federal Intelligence Service, Number 10, Downing Street or Whitehall for the prime minister of the United Kingdom and the UK civil service, the White House and Capitol Hill for the executive and legislative branches, respectively, of the United States federal government, Foggy Bottom for the U.S. State Department, Langley for the Central Intelligence Agency, Quantico for either the Federal Bureau of Investigation academy and forensic laboratory or the Marine Corps base of the same name, Malacañang for the President of the Philippines, their advisers and Office of the President, "La Moncloa" for the Prime Minister of Spain, and Vatican for the pope, Holy See and Roman Curia. Other names of addresses or locations can become convenient shorthand names in international diplomacy, allowing commentators and insiders to refer impersonally and succinctly to foreign ministries with impressive and imposing names as (for example) the Quai d'Orsay, the Wilhelmstrasse, the Kremlin, and the Porte.
A place (or places) can represent an entire industry. For instance: Wall Street, used metonymically, can stand for the entire U.S. financial and corporate banking sector; K Street for Washington, D.C.'s lobbying industry or lobbying in the United States in general; Hollywood for the U.S. film industry, and the people associated with it; Broadway for the American commercial theatrical industry; Madison Avenue for the American advertising industry; and Silicon Valley for the American technology industry. The High Street (of which there are over 5,000 in Britain) is a term commonly used to refer to the entire British retail sector. Common nouns and phrases can also be metonyms: "red tape" can stand for bureaucracy, whether or not that bureaucracy uses actual red tape to bind documents. In Commonwealth realms, The Crown is a metonym for the state in all its aspects.
In recent Israeli usage, the term "Balfour" came to refer to the Israeli Prime Minister's residence, located on Balfour Street in Jerusalem, to all the streets around it where demonstrations frequently take place, and also to the Prime Minister and his family who live in the residence.
Rhetoric in ancient history
Western culture studied poetic language and deemed it to be rhetoric. A. Al-Sharafi supports this concept in his book Textual Metonymy, "Greek rhetorical scholarship at one time became entirely poetic scholarship." Philosophers and rhetoricians thought that metaphors were the primary figurative language used in rhetoric. Metaphors served as a better means to attract the audience's attention because the audience had to read between the lines in order to get an understanding of what the speaker was trying to say. Others did not think of metonymy as a good rhetorical method because metonymy did not involve symbolism. Al-Sharafi explains, "This is why they undermined practical and purely referential discourse because it was seen as banal and not containing anything new, strange or shocking."
Greek scholars contributed to the definition of metonymy. For example, Isocrates worked to define the difference between poetic language and non-poetic language by saying that, "Prose writers are handicapped in this regard because their discourse has to conform to the forms and terms used by the citizens and to those arguments which are precise and relevant to the subject-matter." In other words, Isocrates proposes here that metaphor is a distinctive feature of poetic language because it conveys the experience of the world afresh and provides a kind of defamiliarisation in the way the citizens perceive the world. Democritus described metonymy by saying, "Metonymy, that is the fact that words and meaning change." Aristotle discussed different definitions of metaphor, regarding one type as what we know to be metonymy today.
Latin scholars also had an influence on metonymy. The treatise Rhetorica ad Herennium defines metonymy as "the figure which draws from an object closely akin or associated an expression suggesting the object meant, but not called by its own name." The author describes the process of metonymy as follows: we first figure out what a word means; we then figure out that word's relationship with other words; we understand, and then call the word by a name that it is associated with. "Perceived as such then metonymy will be a figure of speech in which there is a process of abstracting a relation of proximity between two words to the extent that one will be used in place of another." Cicero viewed metonymy as more of a stylish rhetorical method and described it as being based on words, but motivated by style.
Jakobson, structuralism and realism
Metonymy became important in French structuralism through the work of Roman Jakobson. In his 1956 essay "The Metaphoric and Metonymic Poles", Jakobson relates metonymy to the linguistic practice of [syntagmatic] combination and to the literary practice of realism. He explains:
The primacy of the metaphoric process in the literary schools of Romanticism and symbolism has been repeatedly acknowledged, but it is still insufficiently realized that it is the predominance of metonymy which underlies and actually predetermines the so-called 'realistic' trend, which belongs to an intermediary stage between the decline of Romanticism and the rise of symbolism and is opposed to both. Following the path of contiguous relationships, the realistic author metonymically digresses from the plot to the atmosphere and from the characters to the setting in space and time. He is fond of synecdochic details. In the scene of Anna Karenina's suicide Tolstoy's artistic attention is focused on the heroine's handbag; and in War and Peace the synecdoches "hair on the upper lip" or "bare shoulders" are used by the same writer to stand for the female characters to whom these features belong.
Jakobson's theories were important for Claude Lévi-Strauss, Roland Barthes, Jacques Lacan, and others.
Dreams can use metonyms.
Art
Metonyms can also be wordless. For example, Roman Jakobson argued that cubist art relied heavily on nonlinguistic metonyms, while surrealist art relied more on metaphors.
Lakoff and Turner argued that all words are metonyms: "Words stand for the concepts they express." Some artists have used actual words as metonyms in their paintings. For example, Miró's 1925 painting "Photo: This is the Color of My Dreams" has the word "photo" to represent the image of his dreams. This painting comes from a series of paintings called peintures-poésies (paintings-poems) which reflect Miró's interest in dreams and the subconscious and the relationship of words, images, and thoughts. Picasso, in his 1911 painting "Pipe Rack and Still Life on Table", inserts the word "Ocean" rather than painting an ocean. These paintings by Miró and Picasso are, in a sense, the reverse of a rebus: the word stands for the picture, instead of the picture standing for the word.
See also
-onym
Antonomasia
Deferred reference
Eggcorn
Eponym
Enthymeme
Euphemism by comparison
Generic trademark
Kenning
List of metonyms
Meronymy
Newspeak
Pars pro toto
Simile
Slang
Sobriquet
Social stereotype
Synecdoche
Totum pro parte
References
Citations
Sources
Further reading
Figures of speech
Narrative techniques
Semantics
Tropes by type
Ansoff matrix
The Ansoff matrix is a strategic planning tool that provides a framework to help executives, senior managers, and marketers devise strategies for future business growth. It is named after Russian American Igor Ansoff, an applied mathematician and business manager, who created the concept.
Growth strategies
Ansoff, in his 1957 paper, "Strategies for Diversification", provided a definition for product-market strategy as "a joint statement of a product line and the corresponding set of missions which the products are designed to fulfill". He describes four growth alternatives for growing an organization in existing or new markets, with existing or new products. Each alternative poses differing levels of risk for an organization.
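The logic of the four alternatives can be summarized as a simple two-by-two lookup. The sketch below, in Python, is only a schematic rendering of the matrix; the Novelty enum and the function name are our own illustrative choices, not part of Ansoff's terminology.

from enum import Enum

class Novelty(Enum):
    EXISTING = "existing"
    NEW = "new"

# The four Ansoff quadrants, keyed by (product novelty, market novelty).
ANSOFF_MATRIX = {
    (Novelty.EXISTING, Novelty.EXISTING): "Market penetration",
    (Novelty.EXISTING, Novelty.NEW): "Market development",
    (Novelty.NEW, Novelty.EXISTING): "Product development",
    (Novelty.NEW, Novelty.NEW): "Diversification",
}

def growth_strategy(product: Novelty, market: Novelty) -> str:
    """Return the Ansoff quadrant for a given product/market combination."""
    return ANSOFF_MATRIX[(product, market)]

# Example: taking an existing product into a new market.
print(growth_strategy(Novelty.EXISTING, Novelty.NEW))  # Market development

Reading the table row by row also makes the risk ordering visible: risk rises as more of the tuple moves from EXISTING to NEW, with diversification (both NEW) the riskiest quadrant.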
Market penetration
Market penetration is a growth strategy where an organization aims to expand using its existing offerings (products and services) within current markets. In simpler terms, it seeks to increase its market share in the existing market landscape. It involves attracting new customers, retaining existing ones, or acquiring competitors to capture more of the existing market. To achieve increased sales for its current products, the company adopts more assertive promotion and distribution strategies.
This can be accomplished by:
Adjusting pricing strategies to boost sales volumes.
Increasing marketing and promotion efforts to attract new customers.
Acquiring competitors to increase market share.
Improving product quality to encourage repeat purchase.
Market penetration is generally considered the least risky of the four options, as it leverages the company's established strengths and market knowledge.
Market development
In a market development strategy, an organization tries to expand into new markets, geographies or countries. It does not require significant investment in R&D or product development and the management team can leverage existing products and take them to a different market.
This can be accomplished by:
Targeting different customer segments: Explore beyond current customer base, for instance, consider industrial buyers if product was previously sold to households.
Venturing into new areas or regions: Identify untapped regions within the country by expanding geographically.
Exploring foreign markets: Enter international markets to reach new customers in new geographies.
This strategy is moderately risky, because the organization is selling proven products, albeit in an unfamiliar market.
Product development
In a product development strategy, a company tries to create new products and services targeted at its existing markets to achieve growth. This strategy tries to leverage an existing brand's reputation and customer loyalty by offering them new products and services that address evolving needs or capitalize on new trends. To implement a product development strategy well, businesses should:
Invest in research and development to create products that address changing customer needs.
Gather customer feedback for designing products and services with their desired features.
Collaborate with suppliers and distributors, to ensure a smooth and efficient supply chain.
Develop a strong value proposition and marketing strategies to generate interest and demand.
Product development is considered riskier than market penetration, and carries a similar level of risk to market development.
Diversification
In diversification, an organization tries to grow its market share by introducing new offerings in new markets. Unlike other strategies that build upon existing strengths, diversification requires venturing into uncharted territory, where the organization may have little or no prior experience. It is considered the riskiest strategy because it requires both product and market development. Introducing any product into a new market involves a great deal of research; if the new product does not appeal to local tastes, the business can face heavy losses. Hence, this approach is more suitable for large multinational corporations.
Types of diversification can broadly be categorized as:
Concentric diversification: Introducing a similar product within the existing product line with the purpose of leveraging existing expertise to expand the product range.
Horizontal diversification: Introducing an unrelated new product alongside existing offerings with the objective of reaching new customer segments and reducing dependence on a single category.
Conglomerate diversification: Entering entirely different markets with unrelated products. Typically done to achieve financial stability by diversifying across diverse industries.
Uses
The Ansoff matrix is a useful tool for organizations wanting to identify and explore their growth options. Although the risk varies between quadrants, with diversification being the riskiest, it can be argued that if an organization diversifies its offering successfully into multiple unrelated markets then, in fact, its overall portfolio risk is lowered.
An approach to personal career development has also been developed using the matrix, with expert development (same industry, same skills) corresponding to market penetration, industry transfer to market development, functional skill development to product development, and retraining to diversification.
Criticisms
Isolation challenges
Used by itself, the Ansoff matrix could be misleading. It does not take into account the activities of competitors and the ability of competitors to counter moves into other industries. It also fails to consider the challenges and risks of changes to business-as-usual activities. An organization hoping to move into new markets or create new products (or both) must consider whether it possesses transferable skills, flexible structures, and agreeable stakeholders.
Logical consistency challenges
The logic of the Ansoff matrix has been questioned. The logical issues pertain to interpretations about newness. If one assumes a new product really is new to the firm, in many cases a new product will simultaneously take the firm into a new, unfamiliar market. In that case, one of the Ansoff quadrants, diversification, is redundant. Alternatively, if a new product does not necessarily take the firm into a new market, then the combination of new products into new markets does not always equate to diversification, in the sense of venturing into a completely unknown business.
See also
Business model
Business triage
First-mover advantage
Marketing
Market segmentation
References
Business terms
Boyer's model of scholarship
Boyer's model of scholarship is an academic model advocating expansion of the traditional definition of scholarship and research into four types of scholarship. It was introduced in 1990 by Ernest Boyer. According to Boyer, traditional research, or the scholarship of discovery, had been the center of academic life and crucial to an institution's advancement, but it needed to be broadened and made more flexible to include not only the new social and environmental challenges beyond the campus but also the reality of contemporary life. His vision was to change the research mission of universities by introducing the idea that scholarship needed to be redefined.
He proposed that scholarship include these four different categories:
The scholarship of discovery that includes original research that advances knowledge (e.g., basic research);
The scholarship of integration that involves synthesis of information across disciplines, across topics within a discipline, or across time (e.g., interprofessional education, or science communication);
The scholarship of application (also later called the scholarship of engagement) that goes beyond the service duties of a faculty member to those within or outside the University and involves the rigor and application of disciplinary expertise, with results that can be shared with and/or evaluated by peers (e.g., Cooperative State Research, Education, and Extension Service, or science diplomacy); and
The scholarship of teaching and learning that involves the systematic study of teaching and learning processes. It differs from scholarly teaching in that it requires the work be made public, made available for peer review and critique according to accepted standards, and should be reproducible and extensible by other scholars.
Boyer's model has been embraced across academia with occasional refinement, such as specific applications for different disciplines.
References
Further reading
Boyer, E. L. (1996). From scholarship reconsidered to scholarship assessed. Quest, 48(2), 129-139.
Glassick, C. E. (2000). Boyer's expanded definitions of scholarship, the standards for assessing scholarship, and the elusiveness of the scholarship of teaching. Academic Medicine, 75(9), 877-880.
Klecka, Cari L. (2009). Visions for teacher educators: perspectives on the Association of Teacher Educators' standards. Rowman & Littlefield Education. p. 80.
Metzler, M. W. (1994). Scholarship reconsidered for the professoriate of 2010. Quest, 46(4), 440-455.
Stewart, Trae; Nicole Webster (2010). Problematizing Service-Learning: Critical Reflections for Development and Action. Information Age Publishing. p. 327.
Education theory
1990 introductions
Gaze
In critical theory, philosophy, sociology, and psychoanalysis, the gaze (French: le regard), in the figurative sense, is an individual's (or a group's) awareness and perception of other individuals, other groups, or oneself. Since the 20th century, the concept and the social applications of the gaze have been defined and explained by phenomenologist, existentialist, and post-structuralist philosophers. Jean-Paul Sartre described the gaze (or the look) in Being and Nothingness (1943). Michel Foucault, in Discipline and Punish: The Birth of the Prison (1975), developed the concept of the gaze to illustrate the dynamics of socio-political power relations and the social dynamics of society's mechanisms of discipline. Jacques Derrida, in The Animal That Therefore I Am (More to Come) (1997), elaborated upon the inter-species relations that exist among human beings and other animals, which are established by way of the gaze.
Psychoanalysis
In Lacanian psychoanalytic theory, Lacan's view on the gaze changes throughout the course of his work. Initially, the concept of the gaze was used by Lacan through his psychoanalytic work on the mirror stage. The mirror stage occurs when a child encountering a mirror learns that they have an external appearance. Theoretically, this is where the child begins their entrance into culture and the world. The child enters language and culture through establishing an ideal image of themselves in the mirror. This image is someone the child can aspire to be like and work towards. The role of the ideal ego or self can also be filled by other people in their lives such as parents, siblings, teachers etc.
In his later essays however, Lacan refers to the gaze as the anxious feeling that one is being watched. More specifically, it is when the object that one is viewing is somehow looking back at the subject on its own terms. The psychological effect upon the person subjected to the gaze is a loss of autonomy upon becoming aware that they are a visible object. Lacan extrapolated that the gaze and the effects of the gaze might be produced by an inanimate object, and thus a person's awareness of any object can induce the self-awareness of also being an object in the material world of reality. The philosophic and psychologic importance of the gaze is in the meeting of the face and the gaze, because only there do people exist for one another.
Systems of power
The gaze can be understood in psychological terms: "to gaze implies more than to look at – it signifies a psychological relationship of power, in which the gazer is superior to the object of the gaze." In Practices of Looking: An Introduction to Visual Culture (2009), Marita Sturken and Lisa Cartwright said that "the gaze is [conceptually] integral to systems of power, and [to] ideas about knowledge"; that to practice the gaze is to enter a personal relationship with the person being looked at. Foucault's concepts of panopticism, of the power/knowledge binary, and of biopower address the modes of personal self-regulation that a person practices when under surveillance; the modification of personal behaviour by way of institutional surveillance. In 'The politics of the gaze: Between Foucault and Merleau-Ponty', Nick Crossley (1993) argued that Foucault's account of the Panopticon and Panoptic power has deficiencies that Merleau-Ponty's philosophy allows us to overcome.
In The Birth of the Clinic (1963), Michel Foucault first applied the medical gaze to conceptually describe and explain the act of looking, as part of the process of medical diagnosis; the unequal power dynamics between doctors and patients; and the cultural hegemony of intellectual authority that a society grants to medical knowledge and medicine men. In Discipline and Punish: The Birth of the Prison (1975), Foucault develops the gaze as an apparatus of power based upon the social dynamics of power relations, and the social dynamics of disciplinary mechanisms, such as surveillance and personal self-regulation, as practices in a prison and in a school.
Male gaze
The concept of the "male gaze" was first used by the English art critic John Berger in Ways of Seeing, a series of films for the BBC aired in January 1972, and later a book, as part of his analysis of the treatment of the nude in European painting. Berger described the difference between how men and women view and are viewed in art and in society, asserting that men are placed into the role of the watcher while women are positioned to be looked at. Laura Mulvey, a British film critic and feminist, similarly critiqued traditional media representations of the female character in cinema.
In her 1975 essay Visual Pleasure and Narrative Cinema, Mulvey discusses how activity and passivity are associated with gender. Essentially, Mulvey argues that masculinity is related to the active, whereas femininity is related to the passive. Furthermore, she highlights heterosexual desire and identity and how they are tied to the roles assigned to masculinity and femininity. This puts the viewer of a film into the role of the active masculine and coaxes the viewer to desire the passive feminine, leaving no room for female activity and desire in the active, stereotypically masculine role. Hollywood films played to the models of voyeurism and scopophilia. The concept has subsequently been influential in feminist film theory and media studies. Berger, Mulvey, and Foucault all linked the act of the gaze inextricably to power.
Female gaze
The term "female gaze" was created as a response to the proposed concept of the male gaze as coined by Laura Mulvey. In particular, it is a rebellion against the viewership censored to an only masculine lens and feminine desire regardless of the viewer's gender identity or sexual orientation. In essence, the forced desire of femininity enacts in the erasure of female desire and sexuality. In Judith Butler's 1990 book Gender Trouble, she proposed the idea of the female gaze as a way in which men choose to perform their masculinity by using women as the ones who force men into self-regulation. Film director Deborah Kampmeier rejected the idea of the female gaze in preference for the female experience. She stated, "(F)or me personally, it’s not (about) a female gaze. It’s the female experience. I don't gaze, I actually move through the world, feeling the world emotionally and sensorily and in my body."
Objectifying gaze
Feminist objectification theory was first proposed by Barbara Fredrickson and Tomi-Ann Roberts in 1997. Objectification theory is a framework that attempts to bring to light the lived experiences of women, in particular those placed under the lens of sexual objectification. The theory is primarily focused through a heterosexual perspective. According to Fredrickson and Roberts, sexual objectification occurs as the experience of being treated as "a body (or collection of body parts) valued predominantly for its use to (or consumption by) others", stripping one of bodily agency and sexuality, as well as humanity.
Fredrickson and Roberts stated that sexual objectification, or the objectifying gaze, occurs in three arenas: interpersonal or social encounters, visual media that depict social encounters, and visual media that depict bodies. Interpersonal and social encounters encompass everyday life and interactions with other people; the objectifying gaze in this context comes from looking at a person as an object or only as a source of sexual pleasure. The two media arenas depend on media portrayals of gender. Because Western culture is heavily media-centered, individuals consume the output of media and allow it to influence their lives, opinions, and perceptions. The two arenas differ in the contexts of objectification they portray: the first occurs in media outlets such as advertisements, which depict social situations in themselves, and the second occurs in media platforms such as social media, in which bodies or body parts can be put on display. The third context also aligns the viewer with the objectifying gaze.
Objectification theory and the objectifying gaze also enable a state or trait of self-objectification. Self-objectification occurs when one adapts to living in a world where the objectifying gaze is constantly applied to them and normalized; the individual then begins to view themselves from the third-party perspective of that gaze. Self-objectification is a response to the anticipation of being objectified. The individual may restrict their social movement or behaviour so as to display themselves as desirable, a strategy used in an effort to regain some social control in response to the loss of control that comes with the sexualized or objectifying gaze. For example, a woman may portray a feminized version of herself in response to the objectifying gaze.
Although the original objectification theory focuses mainly on the implications for women in the spotlight of the objectifying gaze, with the rise of mass media men are becoming increasingly objectified as well.
Imperial gaze
E. Ann Kaplan has introduced the post-colonial concept of the imperial gaze, in which the observed find themselves defined in terms of the privileged observer's own set of value-preferences. From the perspective of the colonised, the imperial gaze infantilizes and trivializes what it falls upon, asserting its command and ordering function as it does so.
Kaplan comments: "The imperial gaze reflects the assumption that the white western subject is central much as the male gaze assumes the centrality of the male subject."
White gaze
Oppositional gaze
In her 1992 essay titled "The Oppositional Gaze: Black Female Spectatorship", bell hooks counters Laura Mulvey's notion of the (male) gaze by introducing the oppositional gaze of Black women. This concept exists as the reciprocal of the normative white spectator gaze. As Mulvey's essay contextualizes the (male) gaze and its objectification of white women, hooks' essay opens "oppositionality [as] a key paradigm in the feminist analysis of the 'gaze' and of scopophilic regimes in Western culture".
The oppositional gaze remains a critical form of rebellion, responding to the sustained and deliberate misrepresentation of Black women in cinema as the stereotypes of Mammy, Jezebel, or Sapphire.
Postcolonial gaze
First referred to by Edward Said as "orientalism", the term "post-colonial gaze" is used to explain the relationship that colonial powers extended to people of colonized countries. Placing the colonized in the position of the "other" helped to shape and establish the colonizer's identity as the powerful conqueror, and acted as a constant reminder of this idea. The postcolonial gaze "has the function of establishing the subject/object relationship ... it indicates at its point of emanation the location of the subject, and at its point of contact the location of the object". In essence, this means that the colonizer/colonized relationship provided the basis for the colonizer's understanding of themselves and their identity. The appropriation of power is central to understanding how colonizers influenced the countries they colonized, and is deeply connected to the development of post-colonial theory. Utilizing postcolonial gaze theory allows formerly colonized societies to overcome the socially constructed barriers that often prohibit them from expressing their true cultural, social, economic, and political rights.
Male tourist gaze
The tourism image is created through cultural and ideological constructions and advertising agencies that have been male dominated. What is represented by the media assumes a specific type of tourist: white, Western, male, and heterosexual, privileging the gaze of the "master subject" over others. This is the typical representation of the tourist because those behind the lens, the image, and its creation are predominantly male, white, and Western. Those who do not fall into this category are influenced by its supremacy. Through these influences, female characteristics such as youth, beauty, sexuality, or the possession of a man are framed as desirable, while stereotypes of submissive and sensual women paired with powerful "macho" men are projected in advertising.
See also
References
Further reading
Armstrong, Carol and de Zegher, Catherine, Women Artists at the Millennium. MIT Press, October Books, 2006.
de Zegher, Catherine, Inside the Visible. MIT Press, 1996.
Ettinger, Bracha, "The Matrixial Gaze" (1995), reprinted as Ch. 1 in: The Matrixial Borderspace. University of Minnesota Press, 2006.
Felluga, Dino. "Modules on Lacan: On the Gaze." Introductory Guide to Critical Theory — see external links.
Florence, Penny and Pollock, Griselda, Looking back to the Future. G & B Arts, 2001.
Gardner-McTaggart, A. (Forthcoming), International Capital, International Schools, Leadership and Christianity, Globalisation Societies and Education. Taylor and Francis.
Jacobsson, Eva-Maria: A Female Gaze? (1999) — see external links.
Kress, Gunther & Theo van Leeuwen: Reading Images: The Grammar of Visual Design. (1996).
Lacan, Jacques: Seminar XI: The Four Fundamental Concepts of Psychoanalysis. NY & London: W.W. Norton and Co., 1978.
Lacan, Jacques: Seminar One: Freud's Papers On Technique (1988).
Lutz, Catherine & Jane Collins: The Photograph as an Intersection of Gazes: The Example of National Geographic (1994). In: Visualizing Theory: Selected Essays from V.A.R. 1990–1994. Edited by Lucien Taylor. New York: Routledge. pp. 363–384.
Mulvey, Laura: Visual Pleasure and Narrative Cinema (1975, 1992).
Notes on The Gaze (1998) — see external links.
Pollock, Griselda, Modernity and the Spaces of Femininity. Routledge, 1988.
Pollock, Griselda (ed.), Psychoanalysis and the Image. Blackwell, 2006.
Sturken, Marita and Lisa Cartwright. Practices of Looking: an introduction to visual culture. Oxford University Press, 2009. p. 94, 103.
Paul, Nalini: The Female Gaze — see external links.
Schroeder, Jonathan E.: Consuming Representation: A Visual Approach to Consumer Research. SSRN.com.
Theory, Culture and Society, Volume 21, Number 1, 2004.
External links
Notes on The Gaze
Robert Doisneau, Un regard Oblique, 1948 — photograph illustrating gaze
The Male Gaze, with photographs of several advertisements
Aux Fenêtres de l'âme (Windows of the Soul), a Ron Padova film
Feminist theory
Concepts in film theory
Human communication
Psychoanalytic terminology
Jacques Lacan
Post-structuralism
Structuralism
Existentialist concepts
Concepts in aesthetics
Postmodern feminism
Modernism

Modernism was an early 20th-century movement in literature, visual arts, and music that emphasized experimentation, abstraction, and subjective experience. Philosophy, politics, architecture, and social issues were all aspects of this movement. Modernism centered around beliefs in a "growing alienation" from prevailing "morality, optimism, and convention" and a desire to change how "human beings in a society interact and live together".
The modernist movement emerged during the late 19th century in response to significant changes in Western culture, including secularization and the growing influence of science. It is characterized by a self-conscious rejection of tradition and the search for newer means of cultural expression. Modernism was influenced by widespread technological innovation, industrialization, and urbanization, as well as the cultural and geopolitical shifts that occurred after World War I. Artistic movements and techniques associated with modernism include abstract art, literary stream-of-consciousness, cinematic montage, musical atonality and twelve-tonality, modernist architecture, and urban planning.
Modernism took a critical stance towards the Enlightenment concept of rationalism. The movement also rejected the concept of absolute originality — the idea of "creation from nothingness" — upheld in the 19th century by both realism and Romanticism, replacing it with techniques of collage, reprise, incorporation, rewriting, recapitulation, revision, and parody. Another feature of modernism was reflexivity about artistic and social convention, which led to experimentation highlighting how works of art are made as well as the material from which they are created. Debate about the timeline of modernism continues, with some scholars arguing that it evolved into late modernism or high modernism. Postmodernism, meanwhile, rejects many of the principles of modernism.
Overview and definition
Modernism was a cultural movement that impacted the arts as well as the broader Zeitgeist. It is commonly described as a system of thought and behavior marked by self-consciousness or self-reference, prevalent within the avant-garde of various arts and disciplines. It is also often perceived, especially in the West, as a socially progressive movement that affirms the power of human beings to create, improve, and reshape their environment with the aid of practical experimentation, scientific knowledge, or technology. From this perspective, modernism encourages the re-examination of every aspect of existence: modernists analyzed the elements of society they believed were holding back progress and sought to replace them with new ways of reaching the same ends.
According to historian Roger Griffin, modernism can be defined as a broad cultural, social, or political initiative sustained by the ethos of "the temporality of the new". Griffin believed that modernism aspired to restore a "sense of sublime order and purpose to the contemporary world, thereby counteracting the (perceived) erosion of an overarching 'nomos', or 'sacred canopy', under the fragmenting and secularizing impact of modernity". Therefore, phenomena apparently unrelated to each other such as "Expressionism, Futurism, Vitalism, Theosophy, Psychoanalysis, Nudism, Eugenics, Utopian town planning and architecture, modern dance, Bolshevism, Organic Nationalism — and even the cult of self-sacrifice that sustained the Hecatomb of the First World War — disclose a common cause and psychological matrix in the fight against (perceived) decadence." All of them embody bids to access a "supra-personal experience of reality" in which individuals believed they could transcend their mortality and eventually that they would cease to be victims of history to instead become its creators.
Modernism, Romanticism, Philosophy and Symbol
Literary modernism is often summed up in a line from W. B. Yeats: "Things fall apart; the centre cannot hold" (in 'The Second Coming'). Modernists often search for a metaphysical 'centre' but experience its collapse. (Postmodernism, by way of contrast, celebrates that collapse, exposing the failure of metaphysics, such as Jacques Derrida's deconstruction of metaphysical claims.)
Philosophically, the collapse of metaphysics can be traced back to the Scottish philosopher David Hume (1711–1776), who argued that we never actually perceive one event causing another. We only experience the 'constant conjunction' of events, and do not perceive a metaphysical 'cause'. Similarly, Hume argued that we never know the self as object, only the self as subject, and we are thus blind to our true natures. Moreover, if we only 'know' through sensory experience, such as sight, touch and feeling, then we cannot 'know' anything beyond that experience, and neither can we make metaphysical claims.
Thus, modernism can be driven emotionally by the desire for metaphysical truths, while understanding their impossibility. Some modernist novels, for instance, feature characters like Marlow in Heart of Darkness or Nick Carraway in The Great Gatsby who believe that they have encountered some great truth about nature or character, truths that the novels themselves treat ironically while offering more mundane explanations. Similarly, many poems of Wallace Stevens convey a struggle with the sense of nature's significance, falling under two headings: poems in which the speaker denies that nature has meaning, only for nature to loom up by the end of the poem; and poems in which the speaker claims nature has meaning, only for that meaning to collapse by the end of the poem.
Modernism often rejects nineteenth century realism, if the latter is understood as focusing on the embodiment of meaning within a naturalistic representation. At the same time, some modernists aim at a more 'real' realism, one that is uncentered. Picasso's proto-cubist painting of 1907, Les Demoiselles d'Avignon, does not present its subjects from a single point of view (that of a single viewer), but instead presents a flat, two-dimensional picture plane. 'The Poet' of 1911 is similarly decentred, presenting the body from multiple points of view. As the Peggy Guggenheim Collection website puts it, 'Picasso presents multiple views of each object, as if he had moved around it, and synthesizes them into a single compound image'.
Modernism, with its sense that 'things fall apart,' can be seen as the apotheosis of romanticism, if romanticism is the (often frustrated) quest for metaphysical truths about character, nature, a higher power and meaning in the world. Modernism often yearns for a romantic or metaphysical centre, but later finds its collapse.
This distinction between modernism and romanticism extends to their respective treatments of 'symbol'. The romantics at times see an essential relation (the 'ground') between the symbol (or the 'vehicle', in I.A. Richards's terms) and its 'tenor' (its meaning), for example in Coleridge's description of nature as 'that eternal language which thy God / Utters'. But while some romantics may have perceived nature and its symbols as God's language, for other romantic theorists it remains inscrutable. As Goethe (not himself a romantic) said, 'the idea [or meaning] remains eternally and infinitely active and inaccessible in the image'. This was extended in modernist theory which, drawing on its symbolist precursors, often emphasizes the inscrutability and failure of symbol and metaphor. For example, Wallace Stevens seeks and fails to find meaning in nature, even if he at times seems to sense such a meaning. As such, symbolists and modernists at times adopt a mystical approach to suggest a non-rational sense of meaning.
For these reasons, modernist metaphors may be unnatural, as for instance in T.S. Eliot's description of an evening 'spread out against the sky / Like a patient etherized upon a table'. Similarly, for many later modernist poets nature is unnaturalized and at times mechanized, as for example in Stephen Oliver's image of the moon busily 'hoisting' itself into consciousness.
Origins and early history
Romanticism and realism
Modernism developed out of Romanticism's revolt against the effects of the Industrial Revolution and bourgeois values. Literary scholar Gerald Graff argues that "the ground motive of modernism was criticism of the 19th-century bourgeois social order and its world view; the modernists, carrying the torch of Romanticism."
While J. M. W. Turner (1775–1851), one of the most notable landscape painters of the 19th century, was a member of the Romantic movement, his pioneering work in the study of light, color, and atmosphere "anticipated the French Impressionists" and therefore modernism "in breaking down conventional formulas of representation; though unlike them, he believed that his works should always express significant historical, mythological, literary, or other narrative themes."
However, the modernists were critical of the Romantics' belief that art serves as a window into the nature of reality. They argued that since each viewer interprets art through their own subjective perspective, it can never convey the ultimate metaphysical truth that the Romantics sought. Nonetheless, the modernists did not completely reject the idea of art as a means of understanding the world. To them, it was a tool for challenging and disrupting the viewer's point of view, rather than as a direct means of accessing a higher reality.
Modernism, with its sense that "things fall apart," is often seen as the apotheosis of Romanticism. As August Wilhelm Schlegel, an early German Romantic, described it, while Romanticism searches for metaphysical truths about character, nature, higher power, and meaning in the world, modernism, although yearning for such a metaphysical center, only finds its collapse.
The early 19th century
In the context of the Industrial Revolution (~1760–1840), influential innovations included steam-powered industrialization, especially the development of railways starting in Britain in the 1830s, and the subsequent advancements in physics, engineering, and architecture they led to. A major 19th-century engineering achievement was the Crystal Palace, the huge cast-iron and plate-glass exhibition hall built for the Great Exhibition of 1851 in London. Glass and iron were used in a similar monumental style in the construction of major railway terminals throughout the city, including King's Cross station (1852) and Paddington Station (1854). These technological advances spread abroad, leading to later structures such as the Brooklyn Bridge (1883) and the Eiffel Tower (1889), the latter of which broke all previous limitations on how tall man-made objects could be. While such engineering feats radically altered the 19th-century urban environment and the daily lives of people, the human experience of time itself was altered with the development of the electric telegraph in 1837, as well as the adoption of "standard time" by British railway companies from 1845, a concept which would be adopted throughout the rest of the world over the next fifty years.
Despite continuing technological advances, the ideas that history and civilization were inherently progressive and that such advances were always good came under increasing attack in the 19th century. Arguments arose that the values of the artist and those of society were not merely different, but in fact oftentimes opposed, and that society's current values were antithetical to further progress; therefore, civilization could not move forward in its present form. Early in the century, the philosopher Arthur Schopenhauer (1788–1860), in The World as Will and Representation (1819/20), called into question previous optimism. His ideas had an important influence on later thinkers, including Friedrich Nietzsche (1844–1900). Søren Kierkegaard (1813–1855) and Nietzsche both later rejected the idea that reality could be understood through a purely objective lens, a rejection that had a significant influence on the development of existentialism and nihilism.
Around 1850, the Pre-Raphaelite Brotherhood (a group of English poets, painters, and art critics) began to challenge the dominant trends of industrial Victorian England in "opposition to technical skill without inspiration." They were influenced by the writings of the art critic John Ruskin (1819–1900), who had strong feelings about the role of art in helping to improve the lives of the urban working classes in the rapidly expanding industrial cities of Britain. Art critic Clement Greenberg described the Pre-Raphaelite Brotherhood as proto-modernists: "There the proto-modernists were, of all people, the Pre-Raphaelites (and even before them, as proto-proto-modernists, the German Nazarenes). The Pre-Raphaelites foreshadowed Manet (1832–1883), with whom modernist painting most definitely begins. They acted on a dissatisfaction with painting as practiced in their time, holding that its realism wasn't truthful enough."
Two of the most significant thinkers of the mid-19th century were biologist Charles Darwin (1809–1882), author of On the Origin of Species (1859), and Karl Marx (1818–1883), author of Das Kapital (1867). Despite coming from different fields, both of their theories threatened the established order. Darwin's theory of evolution by natural selection undermined religious certainty and the idea of human uniqueness; in particular, the notion that human beings are driven by the same impulses as "lower animals" proved to be difficult to reconcile with the idea of an ennobling spirituality. Meanwhile, Marx's arguments that there are fundamental contradictions within the capitalist system and that workers are anything but free led to the formulation of Marxist theory.
The late 19th century
Art historians have suggested various dates as starting points for modernism. Historian William Everdell argued that modernism began in the 1870s when metaphorical (or ontological) continuity began to yield to the discrete with mathematician Richard Dedekind's (1831–1916) Dedekind cut and Ludwig Boltzmann's (1844–1906) statistical thermodynamics. Everdell also believed modernism in painting began in 1885–1886 with post-Impressionist artist Georges Seurat's development of Divisionism, the "dots" used to paint A Sunday Afternoon on the Island of La Grande Jatte. On the other hand, visual art critic Clement Greenberg called German philosopher Immanuel Kant (1724–1804) "the first real modernist", although he also wrote, "What can be safely called modernism emerged in the middle of the last century—and rather locally, in France, with Charles Baudelaire (1821–1867) in literature and Manet in painting, and perhaps with Gustave Flaubert (1821–1880), too, in prose fiction. (It was a while later, and not so locally, that modernism appeared in music and architecture)." The poet Baudelaire's Les Fleurs du mal (The Flowers of Evil) and the author Flaubert's Madame Bovary were both published in 1857. Baudelaire's essay "The Painter of Modern Life" (1863) inspired young artists to break away from tradition and innovate new ways of portraying their world in art.
Beginning in the 1860s, two approaches in the arts and letters developed separately in France. The first was Impressionism, a school of painting that initially focused on work done not in studios, but outdoors (en plein air). Impressionist paintings attempted to convey that human beings do not see objects, but instead see light itself. The school gathered adherents despite internal divisions among its leading practitioners and became increasingly influential. Initially rejected from the most important commercial show of the time, the government-sponsored Paris Salon, the Impressionists organized yearly group exhibitions in commercial venues during the 1870s and 1880s, timing them to coincide with the official Salon. In 1863, the Salon des Refusés, created by Emperor Napoleon III, displayed all of the paintings rejected by the Paris Salon. While most of these were in standard styles by inferior artists, the work of Manet attracted attention and opened commercial doors to the movement. The second French school was symbolism, which literary historians see beginning with Charles Baudelaire and including the later poets Arthur Rimbaud (1854–1891) with Une Saison en Enfer (A Season in Hell, 1873), Paul Verlaine (1844–1896), Stéphane Mallarmé (1842–1898), and Paul Valéry (1871–1945). The symbolists "stressed the priority of suggestion and evocation over direct description and explicit analogy," and were especially interested in "the musical properties of language."
Cabaret, which gave birth to so many of the arts of modernism, including the immediate precursors of film, may be said to have begun in France in 1881 with the opening of the Black Cat in Montmartre, the beginning of the ironic monologue, and the founding of the Society of Incoherent Arts.
The theories of Sigmund Freud (1856–1939), Krafft-Ebing and other sexologists were influential in the early days of modernism. Freud's first major work was Studies on Hysteria (with Josef Breuer, 1895). Central to Freud's thinking is the idea "of the primacy of the unconscious mind in mental life", so that all subjective reality was based on the interactions between basic drives and instincts, through which the outside world was perceived. Freud's description of subjective states involved an unconscious mind full of primal impulses, and counterbalancing self-imposed restrictions derived from social values.
The works of Friedrich Nietzsche (1844–1900) were another major precursor of modernism, with a philosophy in which psychological drives, specifically the "will to power" (Wille zur Macht), were of central importance: "Nietzsche often identified life itself with 'will to power', that is, with an instinct for growth and durability." Henri Bergson (1859–1941), on the other hand, emphasized the difference between scientific, clock time and the direct, subjective human experience of time. His work on time and consciousness "had a great influence on 20th-century novelists," especially those modernists who used the "stream of consciousness" technique, such as Dorothy Richardson, James Joyce, and Virginia Woolf (1882–1941). Also important in Bergson's philosophy was the idea of élan vital, the life force, which "brings about the creative evolution of everything." His philosophy also placed a high value on intuition, though without rejecting the importance of the intellect.
Important literary precursors of modernism included esteemed writers such as Fyodor Dostoevsky (1821–1881), whose novels include Crime and Punishment (1866) and The Brothers Karamazov (1880); Walt Whitman (1819–1892), who published the poetry collection Leaves of Grass (1855–1891); and August Strindberg (1849–1912), especially his later plays, including the trilogy To Damascus (1898–1901), A Dream Play (1902), and The Ghost Sonata (1907). Henry James has also been suggested as a significant precursor to modernism in works as early as The Portrait of a Lady (1881).
Modernism emerges
1901 to 1930
Out of the collision of ideals derived from Romanticism and an attempt to find a way for knowledge to explain that which was as yet unknown, came the first wave of modernist works in the opening decade of the 20th century. Although their authors considered them to be extensions of existing trends in art, these works broke the implicit understanding the general public had of art: that artists were the interpreters and representatives of bourgeois culture and ideas. These "modernist" landmarks include the atonal ending of Arnold Schoenberg's Second String Quartet in 1908, the Expressionist paintings of Wassily Kandinsky starting in 1903, and culminating with his first abstract painting and the founding of the Blue Rider group in Munich in 1911, and the rise of fauvism and the inventions of Cubism from the studios of Henri Matisse, Pablo Picasso, Georges Braque, and others, in the years between 1900 and 1910.
An important aspect of modernism is how it relates to tradition through its adoption of techniques like reprise, incorporation, rewriting, recapitulation, revision, and parody in new forms.
T. S. Eliot made significant comments on the relation of the artist to tradition, including: "[W]e shall often find that not only the best, but the most individual parts of [a poet's] work, may be those in which the dead poets, his ancestors, assert their immortality most vigorously." However, the relationship of modernism with tradition was complex, as literary scholar Peter Childs indicates: "There were paradoxical if not opposed trends towards revolutionary and reactionary positions, fear of the new and delight at the disappearance of the old, nihilism and fanatical enthusiasm, creativity, and despair."
An example of how modernist art can apply older traditions while also incorporating new techniques can be found within the music of the composer Arnold Schoenberg. On the one hand, he rejected traditional tonal harmony, the hierarchical system of organizing works of music that had guided musical composition for at least a century and a half. Schoenberg believed he had discovered a wholly new way of organizing sound based on the use of twelve-note rows. Yet, while this was indeed a wholly new technique, its origins can be traced back to the work of earlier composers such as Franz Liszt, Richard Wagner, Gustav Mahler, Richard Strauss, and Max Reger.
In the world of art, in the first decade of the 20th century, young painters such as Pablo Picasso and Henri Matisse caused much controversy and attracted great criticism with their rejection of traditional perspective as the means of structuring paintings, though the Impressionist Claude Monet had already been innovative in his use of perspective. In 1907, as Picasso was painting Les Demoiselles d'Avignon, Oskar Kokoschka was writing Mörder, Hoffnung der Frauen (Murderer, Hope of Women), the first Expressionist play (produced with scandal in 1909), and Arnold Schoenberg was composing his String Quartet No. 2 in F-sharp minor (1908), his first composition without a tonal center.
A primary influence that led to Cubism was the representation of three-dimensional form in the late works of Paul Cézanne, which were displayed in a retrospective at the 1907 Salon d'Automne. In Cubist artwork, objects are analyzed, broken up, and reassembled in an abstract form; instead of depicting objects from one viewpoint, the artist depicts the subject from a multitude of viewpoints to represent the subject in a greater context. Cubism was brought to the attention of the general public for the first time in 1911 at the Salon des Indépendants in Paris (held 21 April – 13 June). Jean Metzinger, Albert Gleizes, Henri Le Fauconnier, Robert Delaunay, Fernand Léger and Roger de La Fresnaye were shown together in Room 41, provoking a 'scandal' out of which Cubism emerged and spread throughout Paris and beyond. Also in 1911, Kandinsky painted Bild mit Kreis (Picture with a Circle), which he later called the first abstract painting. In 1912, Metzinger and Gleizes wrote the first (and only) major Cubist manifesto, Du "Cubisme", published in time for the Salon de la Section d'Or, the largest Cubist exhibition to date. In 1912 Metzinger painted and exhibited his enchanting La Femme au Cheval (Woman with a Horse) and Danseuse au Café (Dancer in a Café). Albert Gleizes painted and exhibited his Les Baigneuses (The Bathers) and his monumental Le Dépiquage des Moissons (Harvest Threshing). This work, along with La Ville de Paris (City of Paris) by Robert Delaunay, was the largest and most ambitious Cubist painting undertaken during the pre-war Cubist period.
In 1905, a group of four German artists, led by Ernst Ludwig Kirchner, formed Die Brücke (The Bridge) in the city of Dresden. This was arguably the founding organization for the German Expressionist movement, though they did not use the word itself. A few years later, in 1911, a like-minded group of young artists formed Der Blaue Reiter (The Blue Rider) in Munich. The name came from Wassily Kandinsky's Der Blaue Reiter painting of 1903. Among their members were Kandinsky, Franz Marc, Paul Klee, and August Macke. However, the term "Expressionism" did not firmly establish itself until 1913. Though initially mainly a German artistic movement, most predominant in painting, poetry and the theatre between 1910 and 1930, most precursors of the movement were not German. Furthermore, there have been Expressionist writers of prose fiction, as well as non-German speaking Expressionist writers, and, while the movement had declined in Germany with the rise of Adolf Hitler in the 1930s, there were subsequent Expressionist works.
Expressionism is notoriously difficult to define, in part because it "overlapped with other major 'isms' of the modernist period: with Futurism, Vorticism, Cubism, Surrealism and Dada." Richard Murphy also comments: "[The] search for an all-inclusive definition is problematic to the extent that the most challenging Expressionists," such as the novelist Franz Kafka, poet Gottfried Benn, and novelist Alfred Döblin were simultaneously the most vociferous anti-Expressionists. What, however, can be said, is that it was a movement that developed in the early 20th century mainly in Germany in reaction to the dehumanizing effect of industrialization and the growth of cities, and that "one of the central means by which Expressionism identifies itself as an avant-garde movement, and by which it marks its distance to traditions and the cultural institution as a whole is through its relationship to realism and the dominant conventions of representation." More explicitly: the Expressionists rejected the ideology of realism.
There was a concentrated Expressionist movement in early 20th-century German theater, of which Georg Kaiser and Ernst Toller were the most famous playwrights. Other notable Expressionist dramatists included Reinhard Sorge, Walter Hasenclever, Hans Henny Jahnn, and Arnolt Bronnen. They looked back to Swedish playwright August Strindberg and German actor and dramatist Frank Wedekind as precursors of their dramaturgical experiments. Oskar Kokoschka's Murderer, the Hope of Women was the first fully Expressionist work for the theater, which opened on 4 July 1909 in Vienna. The extreme simplification of characters to mythic types, choral effects, declamatory dialogue and heightened intensity would become characteristic of later Expressionist plays. The first full-length Expressionist play was The Son by Walter Hasenclever, which was published in 1914 and first performed in 1916.
Futurism is another modernist movement. In 1909, the Parisian newspaper Le Figaro published F. T. Marinetti's first manifesto. Soon afterward, a group of painters (Giacomo Balla, Umberto Boccioni, Carlo Carrà, Luigi Russolo, and Gino Severini) co-signed the Futurist Manifesto. Modeled on Marx and Engels' famous "Communist Manifesto" (1848), such manifestos put forward ideas that were meant to provoke and to gather followers. However, arguments in favor of geometric or purely abstract painting were, at this time, largely confined to "little magazines" which had only tiny circulations. Modernist primitivism and pessimism were controversial, and the mainstream in the first decade of the 20th century was still inclined towards a faith in progress and liberal optimism.
Abstract artists, taking as their examples the Impressionists, as well as Paul Cézanne (1839–1906) and Edvard Munch (1863–1944), began with the assumption that color and shape, not the depiction of the natural world, formed the essential characteristics of art. Western art had been, from the Renaissance up to the middle of the 19th century, underpinned by the logic of perspective and an attempt to reproduce an illusion of visible reality. The arts of cultures other than the European had become accessible and showed alternative ways of describing visual experience to the artist. By the end of the 19th century, many artists felt a need to create a new kind of art that encompassed the fundamental changes taking place in technology, science and philosophy. The sources from which individual artists drew their theoretical arguments were diverse and reflected the social and intellectual preoccupations in all areas of Western culture at that time. Wassily Kandinsky, Piet Mondrian, and Kazimir Malevich all believed in redefining art as the arrangement of pure color. The use of photography, which had rendered much of the representational function of visual art obsolete, strongly affected this aspect of modernism.
Modernist architects and designers, such as Frank Lloyd Wright and Le Corbusier, believed that new technology rendered old styles of building obsolete. Le Corbusier thought that buildings should function as "machines for living in", analogous to cars, which he saw as machines for traveling in. Just as cars had replaced the horse, so modernist design should reject the old styles and structures inherited from Ancient Greece or the Middle Ages. Following this machine aesthetic, modernist designers typically rejected decorative motifs in design, preferring to emphasize the materials used and pure geometrical forms. The skyscraper is the archetypal modernist building, and the Wainwright Building, a 10-story office building completed in 1891 in St. Louis, Missouri, United States, is among the first skyscrapers in the world. Ludwig Mies van der Rohe's Seagram Building in New York (1956–1958) is often regarded as the pinnacle of this modernist high-rise architecture. Many aspects of modernist design persist within the mainstream of contemporary architecture, though previous dogmatism has given way to a more playful use of decoration, historical quotation, and spatial drama.
In 1913—which was the year of philosopher Edmund Husserl's Ideas, physicist Niels Bohr's quantized atom, Ezra Pound's founding of imagism, the Armory Show in New York, and in Saint Petersburg the "first futurist opera", Mikhail Matyushin's Victory over the Sun—another Russian composer, Igor Stravinsky, composed The Rite of Spring, a ballet that depicts human sacrifice and has a musical score full of dissonance and primitive rhythm. This caused an uproar on its first performance in Paris. At this time, though modernism was still "progressive", it increasingly saw traditional forms and social arrangements as hindering progress and recast the artist as a revolutionary, engaged in overthrowing rather than enlightening society. Also in 1913, a less violent event occurred in France with the publication of the first volume of Marcel Proust's important novel sequence À la recherche du temps perdu (1913–1927) (In Search of Lost Time). This is often presented as an early example of a writer using the stream-of-consciousness technique, but Robert Humphrey comments that Proust "is concerned only with the reminiscent aspect of consciousness" and that he "was deliberately recapturing the past for the purpose of communicating; hence he did not write a stream-of-consciousness novel."
Stream of consciousness was an important modernist literary innovation, and it has been suggested that Arthur Schnitzler (1862–1931) was the first to make full use of it in his short story "Leutnant Gustl" ("None but the brave") (1900). Dorothy Richardson was the first English writer to use it, in the early volumes of her novel sequence Pilgrimage (1915–1967). Other modernist novelists that are associated with the use of this narrative technique include James Joyce in Ulysses (1922) and Italo Svevo in La coscienza di Zeno (1923).
However, with the coming of the Great War of 1914–1918 (World War I) and the Russian Revolution of 1917, the world was drastically changed, and doubt was cast on the beliefs and institutions of the past. The failure of the previous status quo seemed self-evident to a generation that had seen millions die fighting over scraps of earth: before 1914, it had been argued that no one would fight such a war, since the cost was too high. The machine age, which had made major changes in the conditions of daily life in the 19th century, had now radically changed the nature of warfare. The traumatic nature of recent experience altered basic assumptions, and a realistic depiction of life in the arts seemed inadequate when faced with the fantastically surreal nature of trench warfare. The view that mankind was making steady moral progress now seemed ridiculous in the face of the senseless slaughter, described in works such as Erich Maria Remarque's novel All Quiet on the Western Front (1929). Therefore, modernism's view of reality, which had been a minority taste before the war, became more generally accepted in the 1920s.
In literature and visual art, some modernists sought to defy expectations mainly to make their art more vivid or to force the audience to take the trouble to question their own preconceptions. This aspect of modernism has often seemed a reaction to consumer culture, which developed in Europe and North America in the late 19th century. Whereas most manufacturers try to make products that will be marketable by appealing to preferences and prejudices, high modernists reject such consumerist attitudes to undermine conventional thinking. The art critic Clement Greenberg expounded this theory of modernism in his essay Avant-Garde and Kitsch. Greenberg labeled the products of consumer culture "kitsch", because their design aimed simply to have maximum appeal, with any difficult features removed. For Greenberg, modernism thus formed a reaction against the development of such examples of modern consumer culture as commercial popular music, Hollywood, and advertising. Greenberg associated this with the revolutionary rejection of capitalism.
Some modernists saw themselves as part of a revolutionary culture that included political revolution. In Russia after the 1917 Revolution, there was indeed initially a burgeoning of avant-garde cultural activity, which included Russian Futurism. However, others rejected conventional politics as well as artistic conventions, believing that a revolution of political consciousness had greater importance than a change in political structures. But many modernists saw themselves as apolitical. Others, such as T. S. Eliot, rejected mass popular culture from a conservative position. Some even argue that Modernism in literature and art functioned to sustain an elite culture that excluded the majority of the population.
Surrealism, which originated in the early 1920s, came to be regarded by the public as the most extreme form of modernism, or "the avant-garde of modernism". The word "surrealist" was coined by Guillaume Apollinaire and first appeared in the preface to his play Les Mamelles de Tirésias, which was written in 1903 and first performed in 1917. Major surrealists include Paul Éluard, Robert Desnos, Max Ernst, Hans Arp, Antonin Artaud, Raymond Queneau, Joan Miró, and Marcel Duchamp.
By 1930, modernism had won a place in the political and artistic establishment, although by this time modernism itself had changed.
Modernism continues: 1930–1945
Modernism continued to evolve during the 1930s. Between 1930 and 1932 composer Arnold Schoenberg worked on Moses und Aron, one of the first operas to make use of the twelve-tone technique; in 1937 Pablo Picasso painted Guernica, his Cubist condemnation of fascism; and in 1939 James Joyce pushed the boundaries of the modern novel further with Finnegans Wake. By 1930 modernism had also begun to influence mainstream culture: for example, The New Yorker magazine began publishing modernist-influenced work by young writers and humorists like Dorothy Parker, Robert Benchley, E. B. White, S. J. Perelman, and James Thurber, amongst others. Perelman is highly regarded for the humorous short stories he published in magazines in the 1930s and 1940s, most often in The New Yorker, which are considered to be the first examples of surrealist humor in America. Modern ideas in art also began to appear more frequently in commercials and logos, an early example of which, from 1916, is the famous London Underground logo designed by Edward Johnston.
One of the most visible changes of this period was the adoption of new technologies into the daily lives of ordinary people in Western Europe and North America. Electricity, the telephone, the radio, the automobile—and the need to work with them, repair them and live with them—created social change. The kind of disruptive moment that only a few knew in the 1880s became a common occurrence. For example, the speed of communication reserved for the stock brokers of 1890 became part of family life, at least in middle class North America. Associated with urbanization and changing social mores also came smaller families and changed relationships between parents and their children.
Another strong influence at this time was Marxism. After the generally primitivist/irrationalist aspect of pre-World War I modernism (which for many modernists precluded any attachment to merely political solutions) and the neoclassicism of the 1920s (represented most famously by T. S. Eliot and Igor Stravinsky, which rejected popular solutions to modern problems), the rise of fascism, the Great Depression, and the march to war helped to radicalize a generation. Bertolt Brecht, W. H. Auden, André Breton, Louis Aragon, and the philosophers Antonio Gramsci and Walter Benjamin are perhaps the most famous exemplars of this modernist form of Marxism. There were, however, also modernists explicitly of 'the right', including Salvador Dalí, Wyndham Lewis, T. S. Eliot, Ezra Pound, the Dutch author Menno ter Braak and others.
Significant modernist literary works continued to be created in the 1920s and 1930s, including further novels by Marcel Proust, Virginia Woolf, Robert Musil, and Dorothy Richardson. The American modernist dramatist Eugene O'Neill's career began in 1914, but his major works appeared in the 1920s, 1930s and early 1940s. Two other significant modernist dramatists writing in the 1920s and 1930s were Bertolt Brecht and Federico García Lorca. D. H. Lawrence's Lady Chatterley's Lover was privately published in 1928, while another important landmark for the history of the modern novel came with the publication of William Faulkner's The Sound and the Fury in 1929. In the 1930s, in addition to further major works by Faulkner, Samuel Beckett published his first major work, the novel Murphy (1938). Then in 1939 James Joyce's Finnegans Wake appeared. This is written in a largely idiosyncratic language, consisting of a mixture of standard English lexical items and neologistic multilingual puns and portmanteau words, which attempts to recreate the experience of sleep and dreams. In poetry T. S. Eliot, E. E. Cummings, and Wallace Stevens were writing from the 1920s until the 1950s. While modernist poetry in English is often viewed as an American phenomenon, with leading exponents including Ezra Pound, T. S. Eliot, Marianne Moore, William Carlos Williams, H.D., and Louis Zukofsky, there were important British modernist poets, including David Jones, Hugh MacDiarmid, Basil Bunting, and W. H. Auden. European modernist poets include Federico García Lorca, Anna Akhmatova, Constantine Cavafy, and Paul Valéry.
The modernist movement continued during this period in Soviet Russia. In 1930 composer Dmitri Shostakovich's (1906–1975) opera The Nose premiered, in which he uses a montage of different styles, including folk music, popular song, and atonality. Among his influences was Alban Berg's (1885–1935) opera Wozzeck (1925), which "had made a tremendous impression on Shostakovich when it was staged in Leningrad." However, from 1932 socialist realism began to oust modernism in the Soviet Union; in 1936 Shostakovich was attacked and forced to withdraw his Fourth Symphony. Alban Berg wrote another significant, though incomplete, modernist opera, Lulu, which premiered in 1937. Berg's Violin Concerto was first performed in 1935. Like Shostakovich, other composers faced difficulties in this period.
In Germany Arnold Schoenberg (1874–1951) was forced to flee to the U.S. when Hitler came to power in 1933, because of his modernist atonal style as well as his Jewish ancestry. His major works from this period are a Violin Concerto, Op. 36 (1934/36), and a Piano Concerto, Op. 42 (1942). Schoenberg also wrote tonal music in this period with the Suite for Strings in G major (1935) and the Chamber Symphony No. 2 in E minor, Op. 38 (begun in 1906, completed in 1939). During this time Hungarian modernist Béla Bartók (1881–1945) produced a number of major works, including Music for Strings, Percussion and Celesta (1936) and the Divertimento for String Orchestra (1939), String Quartet No. 5 (1934), and No. 6 (his last, 1939). But he too left for the US in 1940, because of the rise of fascism in Hungary. Igor Stravinsky (1882–1971) continued writing in his neoclassical style during the 1930s and 1940s, writing works like the Symphony of Psalms (1930), Symphony in C (1940), and Symphony in Three Movements (1945). He also emigrated to the US because of World War II. Olivier Messiaen (1908–1992), however, served in the French army during the war and was imprisoned at Stalag VIII-A by the Germans, where he composed his famous Quatuor pour la fin du temps ("Quartet for the End of Time"). The quartet was first performed in January 1941 to an audience of prisoners and prison guards.
In painting, during the 1920s and 1930s and the Great Depression, modernism was defined by Surrealism, late Cubism, Bauhaus, De Stijl, Dada, German Expressionism, and masterful modernist color painters like Henri Matisse and Pierre Bonnard, as well as the abstractions of artists like Piet Mondrian and Wassily Kandinsky, which characterized the European art scene. In Germany, Max Beckmann, Otto Dix, George Grosz and others politicized their paintings, foreshadowing the coming of World War II, while in America modernism appeared in the form of American Scene painting and the social realism and Regionalism movements, which contained both political and social commentary and dominated the art world. Artists like Ben Shahn, Thomas Hart Benton, Grant Wood, George Tooker, John Steuart Curry, Reginald Marsh, and others became prominent. Modernism in Latin America was defined by painters Joaquín Torres-García from Uruguay and Rufino Tamayo from Mexico, while the muralist movement of Diego Rivera, David Siqueiros, José Clemente Orozco, Pedro Nel Gómez and Santiago Martínez Delgado, and the Symbolist paintings of Frida Kahlo, began a renaissance of the arts for the region, characterized by a freer use of color and an emphasis on political messages.
Diego Rivera is perhaps best known to the public for his 1933 mural, Man at the Crossroads, in the lobby of the RCA Building at Rockefeller Center. When his patron Nelson Rockefeller discovered that the mural included a portrait of Vladimir Lenin and other communist imagery, he fired Rivera, and the unfinished work was eventually destroyed by Rockefeller's staff. Frida Kahlo's works are often characterized by their stark portrayals of pain. Kahlo was deeply influenced by indigenous Mexican culture, which is apparent in her paintings' bright colors and dramatic symbolism. Christian and Jewish themes are often depicted in her work as well; she combined elements of the classic religious Mexican tradition, which was often bloody and violent. Frida Kahlo's Symbolist works relate strongly to surrealism and to the magic realism movement in literature.
Political activism was an important piece of David Siqueiros' life, and frequently inspired him to set aside his artistic career. His art was deeply rooted in the Mexican Revolution. The period from the 1920s to the 1950s is known as the Mexican Renaissance, and Siqueiros was active in the attempt to create an art that was at once Mexican and universal. In 1936 he ran an experimental workshop in New York City; the young Jackson Pollock attended the workshop and helped build floats for a May Day parade.
During the 1930s, radical leftist politics characterized many of the artists connected to surrealism, including Pablo Picasso. On 26 April 1937, during the Spanish Civil War, the Basque town of Gernika was bombed by Nazi Germany's Luftwaffe. The Germans were attacking to support the efforts of Francisco Franco to overthrow the Basque government and the Spanish Republican government. Pablo Picasso painted his mural-sized Guernica to commemorate the horrors of the bombing.
During the Great Depression of the 1930s and through the years of World War II, American art was characterized by social realism and American Scene painting, in the work of Grant Wood, Edward Hopper, Ben Shahn, Thomas Hart Benton, and several others. Nighthawks (1942) is a painting by Edward Hopper that portrays people sitting in a downtown diner late at night. It is not only Hopper's most famous painting, but one of the most recognizable in American art. The scene was inspired by a diner in Greenwich Village. Hopper began painting it immediately after the attack on Pearl Harbor. After this event there was a widespread feeling of gloom over the country, a feeling that is portrayed in the painting. The urban street is empty outside the diner, and inside none of the three patrons appears to be looking at or talking to the others; each is instead lost in their own thoughts. This portrayal of modern urban life as empty or lonely is a common theme throughout Hopper's work.
American Gothic is a 1930 painting by Grant Wood portraying a pitchfork-holding farmer and a younger woman in front of a house of Carpenter Gothic style. It is one of the most familiar images in 20th-century American art. Art critics who had favorable opinions about the painting, like Gertrude Stein and Christopher Morley, assumed it was meant to be a satire of rural small-town life. It was thus seen as part of the trend towards increasingly critical depictions of rural America, along the lines of Sherwood Anderson's 1919 Winesburg, Ohio, Sinclair Lewis's 1920 Main Street, and Carl Van Vechten's The Tattooed Countess in literature. However, with the onset of the Great Depression, the painting came to be seen as a depiction of steadfast American pioneer spirit.
The situation for artists in Europe during the 1930s deteriorated rapidly as the Nazis' power in Germany and across Eastern Europe increased. Degenerate art was a term adopted by the Nazi regime in Germany for virtually all modern art. Such art was banned because it was un-German or Jewish Bolshevist in nature, and those identified as degenerate artists were subjected to sanctions. These included being dismissed from teaching positions, being forbidden to exhibit or to sell their art, and in some cases being forbidden to produce art entirely. Degenerate Art was also the title of an exhibition, mounted by the Nazis in Munich in 1937. The climate became so hostile for artists and art associated with modernism and abstraction that many left for the Americas. German artist Max Beckmann and scores of others fled Europe for New York. In New York City a new generation of young and exciting modernist painters, led by Arshile Gorky, Willem de Kooning, and others, was just beginning to come of age.
Arshile Gorky's portrait of someone who might be Willem de Kooning is an example of the evolution of Abstract Expressionism from the context of figure painting, Cubism and Surrealism. Along with his friends de Kooning and John D. Graham, Gorky created biomorphically shaped and abstracted figurative compositions that by the 1940s evolved into totally abstract paintings. Gorky's work seems to be a careful analysis of memory, emotion and shape, using line and color to express feeling and nature.
Attacks on early modernism
Modernism's stress on freedom of expression, experimentation, radicalism, and primitivism disregards conventional expectations. In many art forms this often meant startling and alienating audiences with bizarre and unpredictable effects, as in the strange and disturbing combinations of motifs in Surrealism or the use of extreme dissonance and atonality in modernist music. In literature this often involved the rejection of intelligible plots or characterization in novels, or the creation of poetry that defied clear interpretation. Within the Catholic Church, the specter of Protestantism and Martin Luther was at play in anxieties over modernism and the notion that doctrine develops and changes over time.
From 1932, socialist realism began to oust modernism in the Soviet Union, which had previously endorsed Russian Futurism and Constructivism, as well as the homegrown philosophy of Suprematism.
The Nazi government of Germany deemed modernism narcissistic and nonsensical, as well as "Jewish" (see Antisemitism) and "Negro". The Nazis exhibited modernist paintings alongside works by the mentally ill in an exhibition entitled "Degenerate Art". Accusations of "formalism" could lead to the end of a career, or worse. For this reason, many modernists of the post-war generation felt that they were the most important bulwark against totalitarianism, the "canary in the coal mine", whose repression by a government or other group with supposed authority represented a warning that individual liberties were being threatened. Louis A. Sass compared madness, specifically schizophrenia, and modernism in a less fascist manner by noting their shared disjunctive narratives, surreal images, and incoherence.
After 1945
While The Oxford Encyclopedia of British Literature states that modernism ended by c. 1939 with regard to British and American literature, "When (if) modernism petered out and postmodernism began has been contested almost as hotly as when the transition from Victorianism to modernism occurred." Clement Greenberg sees modernism ending in the 1930s, with the exception of the visual and performing arts, but with regard to music, Paul Griffiths notes that, while modernism "seemed to be a spent force" by the late 1920s, after World War II "a new generation of composers—Boulez, Barraqué, Babbitt, Nono, Stockhausen, Xenakis" revived modernism. In fact, many literary modernists lived into the 1950s and 1960s, though generally they were no longer producing major works. The term "late modernism" is also sometimes applied to modernist works published after 1930. Among the modernists (or late modernists) still publishing after 1945 were Wallace Stevens, Gottfried Benn, T. S. Eliot, Anna Akhmatova, William Faulkner, Dorothy Richardson, John Cowper Powys, and Ezra Pound. Basil Bunting, born in 1901, published his most important modernist poem, Briggflatts, in 1965. In addition, Hermann Broch's The Death of Virgil was published in 1945 and Thomas Mann's Doctor Faustus in 1947. Samuel Beckett, who died in 1989, has been described as a "later modernist". Beckett is a writer with roots in the Expressionist tradition of modernism, who produced works from the 1930s until the 1980s, including Molloy (1951), Waiting for Godot (1953), Happy Days (1961), and Rockaby (1981). The terms "minimalist" and "post-modernist" have also been applied to his later works. The poets Charles Olson (1910–1970) and J. H. Prynne (born 1936) are among the writers in the second half of the 20th century who have been described as late modernists.
More recently, the term "late modernism" has been redefined by at least one critic and used to refer to works written after 1945, rather than 1930. With this usage goes the idea that the ideology of modernism was significantly re-shaped by the events of World War II, especially the Holocaust and the dropping of the atom bomb.
The post-war period left the capitals of Europe in upheaval, with an urgency to economically and physically rebuild and to politically regroup. In Paris (the former center of European culture and the former capital of the art world), the climate for art was a disaster. Important collectors, dealers, and modernist artists, writers, and poets fled Europe for New York and America. The surrealists and modern artists from every cultural center of Europe had fled the onslaught of the Nazis for safe haven in the United States. Many of those who did not flee perished. A few artists, notably Pablo Picasso, Henri Matisse, and Pierre Bonnard, remained in France and survived.
The 1940s in New York City heralded the triumph of American Abstract Expressionism, a modernist movement that combined lessons learned from Henri Matisse, Pablo Picasso, Surrealism, Joan Miró, Cubism, Fauvism, and early modernism via great teachers in America like Hans Hofmann and John D. Graham. American artists benefited from the presence of Piet Mondrian, Fernand Léger, Max Ernst and the André Breton group, Pierre Matisse's gallery, and Peggy Guggenheim's gallery The Art of This Century, as well as other factors.
Paris, moreover, recaptured much of its luster in the 1950s and 1960s as the center of a machine art florescence: both of the leading machine art sculptors, Jean Tinguely and Nicolas Schöffer, moved there to launch their careers, and in light of the technocentric character of modern life, this florescence may well have a particularly long-lasting influence.
Theatre of the Absurd
The term "Theatre of the Absurd" is applied to plays, written primarily by Europeans, that express the belief that human existence has no meaning or purpose and therefore all communication breaks down. Logical construction and argument gives way to irrational and illogical speech and to its ultimate conclusion, silence. While there are significant precursors, including Alfred Jarry (1873–1907), the Theatre of the Absurd is generally seen as beginning in the 1950s with the plays of Samuel Beckett.
Critic Martin Esslin coined the term in his 1960 essay "Theatre of the Absurd". He grouped these plays around a broad theme of the absurd, similar to the way Albert Camus uses the term in his 1942 essay, The Myth of Sisyphus. The Absurd in these plays takes the form of man's reaction to a world apparently without meaning, and/or man as a puppet controlled or menaced by invisible outside forces. Though the term is applied to a wide range of plays, some characteristics coincide in many of them: broad comedy, often similar to vaudeville, mixed with horrific or tragic images; characters caught in hopeless situations forced to do repetitive or meaningless actions; dialogue full of clichés, wordplay, and nonsense; plots that are cyclical or absurdly expansive; and either a parody or dismissal of realism and the concept of the "well-made play".
Playwrights commonly associated with the Theatre of the Absurd include Samuel Beckett (1906–1989), Eugène Ionesco (1909–1994), Jean Genet (1910–1986), Harold Pinter (1930–2008), Tom Stoppard (born 1937), Alexander Vvedensky (1904–1941), Daniil Kharms (1905–1942), Friedrich Dürrenmatt (1921–1990), Alejandro Jodorowsky (born 1929), Fernando Arrabal (born 1932), Václav Havel (1936–2011) and Edward Albee (1928–2016).
Pollock and abstract influences
During the late 1940s, Jackson Pollock's radical approach to painting revolutionized the potential for all contemporary art that followed him. To some extent, Pollock realized that the journey toward making a work of art was as important as the work of art itself. Like Pablo Picasso's innovative reinventions of painting and sculpture in the early 20th century via Cubism and constructed sculpture, Pollock redefined the way art is made. His move away from easel painting and conventionality was a liberating signal to the artists of his era and to all who came after. Artists realized that Jackson Pollock's process—placing unstretched raw canvas on the floor where it could be attacked from all four sides using artistic and industrial materials; dripping and throwing linear skeins of paint; drawing, staining, and brushing; using imagery and non-imagery—essentially blasted art-making beyond any prior boundary. Abstract Expressionism generally expanded and developed the definitions and possibilities available to artists for the creation of new works of art.
The other Abstract Expressionists followed Pollock's breakthrough with new breakthroughs of their own. In a sense the innovations of Jackson Pollock, Willem de Kooning, Franz Kline, Mark Rothko, Philip Guston, Hans Hofmann, Clyfford Still, Barnett Newman, Ad Reinhardt, Robert Motherwell, Peter Voulkos and others opened the floodgates to the diversity and scope of all the art that followed them. Re-readings into abstract art by art historians such as Linda Nochlin, Griselda Pollock and Catherine de Zegher critically show, however, that pioneering women artists who produced major innovations in modern art had been ignored by official accounts of its history.
International figures from British art
Henry Moore (1898–1986) emerged after World War II as Britain's leading sculptor. He was best known for his semi-abstract monumental bronze sculptures which are located around the world as public works of art. His forms are usually abstractions of the human figure, typically depicting mother-and-child or reclining figures, usually suggestive of the female body, apart from a phase in the 1950s when he sculpted family groups. His forms are generally pierced or contain hollow spaces.
In the 1950s, Moore began to receive increasingly significant commissions, including a reclining figure for the UNESCO building in Paris in 1958. With many more public works of art, the scale of Moore's sculptures grew significantly. The last three decades of Moore's life continued in a similar vein, with several major retrospectives taking place around the world, notably a prominent exhibition in the summer of 1972 in the grounds of the Forte di Belvedere overlooking Florence. By the end of the 1970s, there were some 40 exhibitions a year featuring his work. On the campus of the University of Chicago in December 1967, 25 years to the minute after the team of physicists led by Enrico Fermi achieved the first controlled, self-sustaining nuclear chain reaction, Moore's Nuclear Energy was unveiled. Also in Chicago, Moore commemorated science with a large bronze sundial, locally named Man Enters the Cosmos (1980), which was commissioned to recognize the space exploration program.
The "London School" of figurative painters, including Francis Bacon (1909–1992), Lucian Freud (1922–2011), Frank Auerbach (born 1931), Leon Kossoff (born 1926), and Michael Andrews (1928–1995), have received widespread international recognition.
Francis Bacon was an Irish-born British figurative painter known for his bold, graphic and emotionally raw imagery. His painterly but abstracted figures typically appear isolated in glass or steel geometrical cages set against flat, nondescript backgrounds. Bacon began painting during his early 20s but worked only sporadically until his mid-30s. His breakthrough came with the 1944 triptych Three Studies for Figures at the Base of a Crucifixion, which sealed his reputation as a uniquely bleak chronicler of the human condition. His output can be crudely described as consisting of sequences or variations on a single motif: beginning with the 1940s male heads isolated in rooms, the early 1950s screaming popes, and mid to late 1950s animals and lone figures suspended in geometric structures. These were followed by his early 1960s modern variations of the crucifixion in the triptych format. From the mid-1960s to early 1970s, Bacon mainly produced strikingly compassionate portraits of friends. Following the suicide of his lover George Dyer in 1971, his art became more personal, inward-looking, and preoccupied with themes and motifs of death. During his lifetime, Bacon was equally reviled and acclaimed.
Lucian Freud was a German-born British painter, known chiefly for his thickly impastoed portrait and figure paintings, who was widely considered the pre-eminent British artist of his time. His works are noted for their psychological penetration, and for their often discomforting examination of the relationship between artist and model. According to William Grimes of The New York Times, "Lucian Freud and his contemporaries transformed figure painting in the 20th century. In paintings like Girl with a White Dog (1951–1952), Freud put the pictorial language of traditional European painting in the service of an anti-romantic, confrontational style of portraiture that stripped bare the sitter's social facade. Ordinary people—many of them his friends—stared wide-eyed from the canvas, vulnerable to the artist's ruthless inspection."
After Abstract Expressionism
In abstract painting during the 1950s and 1960s, several new directions like hard-edge painting and other forms of geometric abstraction began to appear in artist studios and in radical avant-garde circles as a reaction against the subjectivism of Abstract Expressionism. Clement Greenberg became the voice of post-painterly abstraction when he curated an influential exhibition of new painting that toured important art museums throughout the United States in 1964. Color field painting, hard-edge painting, and lyrical abstraction emerged as radical new directions.
By the late 1960s however, postminimalism, process art and Arte Povera also emerged as revolutionary concepts and movements that encompassed both painting and sculpture, via lyrical abstraction and the post-minimalist movement, and in early conceptual art. Process art, as inspired by Pollock, enabled artists to experiment with and make use of a diverse encyclopedia of style, content, material, placement, sense of time, and plastic and real space. Nancy Graves, Ronald Davis, Howard Hodgkin, Larry Poons, Jannis Kounellis, Brice Marden, Colin McCahon, Bruce Nauman, Richard Tuttle, Alan Saret, Walter Darby Bannard, Lynda Benglis, Dan Christensen, Larry Zox, Ronnie Landfield, Eva Hesse, Keith Sonnier, Richard Serra, Pat Lipsky, Sam Gilliam, Mario Merz and Peter Reginato were some of the younger artists who emerged during the era of late modernism that spawned the heyday of the art of the late 1960s.
Pop art
In 1962, the Sidney Janis Gallery mounted The New Realists, the first major pop art group exhibition in an uptown art gallery in New York City. Janis mounted the exhibition in a 57th Street storefront near his gallery. The show had a great impact on the New York School as well as the greater worldwide art scene. Earlier in England in 1958 the term "Pop Art" was used by Lawrence Alloway to describe paintings associated with the consumerism of the post World War II era. This movement rejected Abstract Expressionism and its focus on the hermeneutic and psychological interior in favor of art that depicted material consumer culture, advertising, and the iconography of the mass production age. The early works of David Hockney and the works of Richard Hamilton and Eduardo Paolozzi (who created the ground-breaking I was a Rich Man's Plaything, 1947) are considered seminal examples in the movement. Meanwhile, in the downtown scene in New York's East Village 10th Street galleries, artists were formulating an American version of pop art. Claes Oldenburg had his storefront, and the Green Gallery on 57th Street began to show the works of Tom Wesselmann and James Rosenquist. Later Leo Castelli exhibited the works of other American artists, including those of Andy Warhol and Roy Lichtenstein for most of their careers. There is a connection between the radical works of Marcel Duchamp and Man Ray, the rebellious Dadaists with a sense of humor, and pop artists like Claes Oldenburg, Andy Warhol, and Roy Lichtenstein, whose paintings reproduce the look of Ben-Day dots, a technique used in commercial reproduction.
Minimalism
Minimalism describes movements in various forms of art and design, especially visual art and music, wherein artists intend to expose the essence or identity of a subject through eliminating all nonessential forms, features, or concepts. Minimalism is any design or style wherein the simplest and fewest elements are used to create the maximum effect.
As a specific movement in the arts, it is identified with developments in post–World War II Western art, most strongly with American visual arts in the 1960s and early 1970s. Prominent artists associated with this movement include Donald Judd, John McCracken, Agnes Martin, Dan Flavin, Robert Morris, Ronald Bladen, Anne Truitt, and Frank Stella. It derives from the reductive aspects of modernism and is often interpreted as a reaction against Abstract Expressionism and a bridge to postminimal art practices. By the early 1960s, minimalism emerged as an abstract movement in art (with roots in the geometric abstraction of Kazimir Malevich, the Bauhaus and Piet Mondrian) that rejected the idea of relational and subjective painting, the complexity of Abstract Expressionist surfaces, and the emotional zeitgeist and polemics present in the arena of action painting. Minimalism argued that extreme simplicity could capture all of the sublime representation needed in art. Minimalism is variously construed either as a precursor to postmodernism, or as a postmodern movement itself. In the latter perspective, early minimalism yielded advanced modernist works, but the movement partially abandoned this direction when some artists like Robert Morris changed direction in favor of the anti-form movement.
Hal Foster, in his essay The Crux of Minimalism, examines the extent to which Donald Judd and Robert Morris both acknowledge and exceed Greenbergian modernism in their published definitions of minimalism. He argues that minimalism is not a "dead end" of modernism, but a "paradigm shift toward postmodern practices that continue to be elaborated today."
Minimal music
The term has expanded to encompass a movement in music that features such repetition and iteration as in the compositions of La Monte Young, Terry Riley, Steve Reich, Philip Glass, and John Adams. Minimalist compositions are sometimes known as systems music. The term "minimal music" is generally used to describe a style of music that developed in America in the late 1960s and 1970s and that was initially connected with these composers. The movement originally involved a number of composers; other lesser-known pioneers included Pauline Oliveros, Phill Niblock, and Richard Maxfield. In Europe, the music of Louis Andriessen, Karel Goeyvaerts, Michael Nyman, Howard Skempton, Eliane Radigue, Gavin Bryars, Steve Martland, Henryk Górecki, Arvo Pärt and John Tavener has also been described as minimalist.
Postminimalism
In the late 1960s, Robert Pincus-Witten coined the term "postminimalism" to describe minimalist-derived art which had content and contextual overtones that minimalism rejected. The term was applied by Pincus-Witten to the work of Eva Hesse, Keith Sonnier, Richard Serra and new work by former minimalists Robert Smithson, Robert Morris, Sol LeWitt, Barry Le Va, and others. Other minimalists, including Donald Judd, Dan Flavin, Carl Andre, Agnes Martin, John McCracken and others, continued to produce late modernist paintings and sculpture for the remainder of their careers.
Since then, many artists have embraced minimal or post-minimal styles, and the label "postmodern" has been attached to them.
Collage, assemblage, installations
Related to Abstract Expressionism was the emergence of combining manufactured items with artist materials, moving away from previous conventions of painting and sculpture. The work of Robert Rauschenberg exemplifies this trend. His "combines" of the 1950s were forerunners of pop art and installation art, and used assemblages of large physical objects, including stuffed animals, birds and commercial photographs. Rauschenberg, Jasper Johns, Larry Rivers, John Chamberlain, Claes Oldenburg, George Segal, Jim Dine, and Edward Kienholz were among important pioneers of both abstraction and pop art. Creating new conventions of art-making, they made acceptable in serious contemporary art circles the radical inclusion in their works of unlikely materials. Another pioneer of collage was Joseph Cornell, whose more intimately scaled works were seen as radical because of both his personal iconography and his use of found objects.
Neo-Dada
In 1917, Marcel Duchamp submitted a urinal as a sculpture for the inaugural exhibition of the Society of Independent Artists, which was to be staged at the Grand Central Palace in New York. He professed his intent that people look at the urinal as if it were a work of art because he said it was a work of art. This urinal, named Fountain, was signed with the pseudonym "R. Mutt". It is also an example of what Duchamp would later call "readymades". This and Duchamp's other works are generally labelled as Dada. Duchamp can be seen as a precursor to conceptual art; other famous examples are John Cage's 4′33″, which is four minutes and thirty-three seconds of silence, and Rauschenberg's Erased de Kooning Drawing. Many conceptual works take the position that art is the result of the viewer viewing an object or act as art, not of the intrinsic qualities of the work itself. In choosing "an ordinary article of life" and creating "a new thought for that object", Duchamp invited onlookers to view Fountain as a sculpture.
Marcel Duchamp famously gave up "art" in favor of chess. Avant-garde composer David Tudor created a piece, Reunion (1968), written jointly with Lowell Cross, that features a chess game in which each move triggers a lighting effect or projection. Duchamp and Cage played the game at the work's premiere.
Steven Best and Douglas Kellner identify Rauschenberg and Jasper Johns as part of the transitional phase, influenced by Duchamp, between modernism and postmodernism. Both used images of ordinary objects, or the objects themselves, in their work, while retaining the abstraction and painterly gestures of high modernism.
Performance and happenings
During the late 1950s and 1960s artists with a wide range of interests began to push the boundaries of contemporary art. Yves Klein in France, Carolee Schneemann, Yayoi Kusama, Charlotte Moorman and Yoko Ono in New York City, and Joseph Beuys, Wolf Vostell and Nam June Paik in Germany were pioneers of performance-based works of art. Groups like The Living Theatre with Julian Beck and Judith Malina collaborated with sculptors and painters to create environments, radically changing the relationship between audience and performer, especially in their piece Paradise Now. The Judson Dance Theater, located at the Judson Memorial Church in New York, and the Judson dancers, notably Yvonne Rainer, Trisha Brown, Elaine Summers, Sally Gross, Simone Forti, Deborah Hay, Lucinda Childs, Steve Paxton and others, collaborated with artists Robert Morris, Robert Whitman, John Cage, and Robert Rauschenberg, and with engineers like Billy Klüver. Park Place Gallery was a center for musical performances by electronic composers Steve Reich, Philip Glass, and other notable performance artists, including Joan Jonas.
These performances were intended as works of a new art form combining sculpture, dance, and music or sound, often with audience participation. They were characterized by the reductive philosophies of Minimalism and the spontaneous improvisation and expressivity of Abstract Expressionism. Images of Schneemann's performances of pieces meant to create shock within the audience are occasionally used to illustrate these kinds of art, and she is often photographed while performing her piece Interior Scroll. However, according to the modernist philosophy surrounding performance art, publishing images of her performing this piece works at cross-purposes with the art itself, for performance artists reject publication entirely: the performance itself is the medium. On this view, other media cannot illustrate performance art; performance is momentary, evanescent, and personal, not for capturing, and representations of performance art in other media, whether by image, video, or narrative, select certain points of view in space or time or otherwise involve the inherent limitations of each medium. The artists deny that recordings illustrate the medium of performance as art.
During the same period, various avant-garde artists created Happenings: mysterious and often spontaneous and unscripted gatherings of artists and their friends and relatives in various specified locations, often incorporating exercises in absurdity, physicality, costuming, spontaneous nudity, and various random or seemingly disconnected acts. Notable creators of happenings included Allan Kaprow, who first used the term in 1958, Claes Oldenburg, Jim Dine, Red Grooms, and Robert Whitman.
Intermedia, multi-media
Another trend in art which has been associated with the term postmodern is the use of a number of different media together. Intermedia is a term coined by Dick Higgins and meant to convey new art forms along the lines of Fluxus, concrete poetry, found objects, performance art, and computer art. Higgins was the publisher of the Something Else Press, a concrete poet married to artist Alison Knowles and an admirer of Marcel Duchamp. Ihab Hassan includes "Intermedia, the fusion of forms, the confusion of realms," in his list of the characteristics of postmodern art. One of the most common forms of "multi-media art" is the use of video-tape and CRT monitors, termed video art. While the theory of combining multiple arts into one art is quite old, and has been revived periodically, the postmodern manifestation is often in combination with performance art, where the dramatic subtext is removed, and what is left is the specific statements of the artist in question or the conceptual statement of their action.
Fluxus
Fluxus was named and loosely organized in 1962 by George Maciunas (1931–1978), a Lithuanian-born American artist. Fluxus traces its beginnings to John Cage's 1957 to 1959 Experimental Composition classes at The New School for Social Research in New York City. Many of his students were artists working in other media with little or no background in music. Cage's students included Fluxus founding members Jackson Mac Low, Al Hansen, George Brecht and Dick Higgins.
Fluxus encouraged a do-it-yourself aesthetic and valued simplicity over complexity. Like Dada before it, Fluxus included a strong current of anti-commercialism and an anti-art sensibility, disparaging the conventional market-driven art world in favor of an artist-centered creative practice. Fluxus artists preferred to work with whatever materials were at hand, and either created their own work or collaborated in the creation process with their colleagues.
Andreas Huyssen criticizes attempts to claim Fluxus for postmodernism as "either the master-code of postmodernism or the ultimately unrepresentable art movement—as it were, postmodernism's sublime." Instead he sees Fluxus as a major Neo-Dadaist phenomenon within the avant-garde tradition. It did not represent a major advance in the development of artistic strategies, though it did express a rebellion against "the administered culture of the 1950s, in which a moderate, domesticated modernism served as ideological prop to the Cold War."
Avant-garde popular music
Modernism had an uneasy relationship with popular forms of music (both in form and aesthetic) while rejecting popular culture. Despite this, Stravinsky used jazz idioms in pieces like "Ragtime" from his 1918 theatrical work Histoire du Soldat and his 1945 Ebony Concerto.
In the 1960s, as popular music began to gain cultural importance and question its status as commercial entertainment, musicians began to look to the post-war avant-garde for inspiration. In 1959, music producer Joe Meek recorded I Hear a New World (1960), which Tiny Mix Tapes' Jonathan Patrick calls a "seminal moment in both electronic music and avant-pop history [...] a collection of dreamy pop vignettes, adorned with dubby echoes and tape-warped sonic tendrils", though it was largely ignored at the time. Other early avant-pop productions included the Beatles' 1966 song "Tomorrow Never Knows", which incorporated techniques from musique concrète, avant-garde composition, Indian music, and electro-acoustic sound manipulation into a 3-minute pop format, and the Velvet Underground's integration of La Monte Young's minimalist and drone music ideas, beat poetry, and 1960s pop art.
Late period
Abstract Expressionism, color field painting, lyrical abstraction, geometric abstraction, minimalism, abstract illusionism, process art, pop art, postminimalism, and other late 20th-century modernist movements in both painting and sculpture continued through the first decade of the 21st century and constitute radical new directions in those media.
At the turn of the 21st century, well-established artists such as Sir Anthony Caro, Lucian Freud, Cy Twombly, Robert Rauschenberg, Jasper Johns, Agnes Martin, Al Held, Ellsworth Kelly, Helen Frankenthaler, Frank Stella, Kenneth Noland, Jules Olitski, Claes Oldenburg, Jim Dine, James Rosenquist, Alex Katz, Philip Pearlstein, and younger artists including Brice Marden, Chuck Close, Sam Gilliam, Isaac Witkin, Sean Scully, Mahirwan Mamtani, Joseph Nechvatal, Elizabeth Murray, Larry Poons, Richard Serra, Walter Darby Bannard, Larry Zox, Ronnie Landfield, Ronald Davis, Dan Christensen, Pat Lipsky, Joel Shapiro, Tom Otterness, Joan Snyder, Ross Bleckner, Archie Rand, Susan Crile, and others continued to produce vital and influential paintings and sculpture.
Modern architecture
Many skyscrapers in Hong Kong and Frankfurt have been inspired by Le Corbusier and modernist architecture, and his style still serves as an influence for buildings worldwide.
Modernism in Asia
The terms "modernism" and "modernist", according to scholar William J. Tyler, "have only recently become part of the standard discourse in English on modern Japanese literature and doubts concerning their authenticity vis-à-vis Western European modernism remain". Tyler finds this odd, given "the decidedly modern prose" of such "well-known Japanese writers as Kawabata Yasunari, Nagai Kafu, and Jun'ichirō Tanizaki". However, "scholars in the visual and fine arts, architecture, and poetry readily embraced "modanizumu" as a key concept for describing and analysing Japanese culture in the 1920s and 1930s". In 1924, various young Japanese writers, including Kawabata and Riichi Yokomitsu started a literary journal Bungei Jidai ("The Artistic Age"). This journal was "part of an 'art for art's sake' movement, influenced by European Cubism, Expressionism, Dada, and other modernist styles".
Japanese modernist architect Kenzō Tange (1913–2005) was one of the most significant architects of the 20th century, combining traditional Japanese styles with modernism and designing major buildings on five continents. Tange was also an influential patron of the Metabolist movement. He said: "It was, I believe, around 1959 or at the beginning of the sixties that I began to think about what I was later to call structuralism." Influenced from an early age by the Swiss modernist Le Corbusier, Tange gained international recognition in 1949 when he won the competition for the design of Hiroshima Peace Memorial Park.
In China, the "New Sensationists" (新感觉派, Xīn Gǎnjué Pài) were a group of writers based in Shanghai who in the 1930s and 1940s, were influenced, to varying degrees, by Western and Japanese modernism. They wrote fiction that was more concerned with the unconscious and with aesthetics than with politics or social problems. Among these writers were Mu Shiying and Shi Zhecun.
In India, the Progressive Artists' Group was a group of modern artists, mainly based in Mumbai, formed in 1947. Though it lacked any particular style, it synthesized Indian art with European and North American influences from the first half of the 20th century, including Post-Impressionism, Cubism and Expressionism.
Modernism in Africa
Peter Kalliney suggests that "Modernist concepts, especially aesthetic autonomy, were fundamental to the literature of decolonization in anglophone Africa." In his opinion, Rajat Neogy, Christopher Okigbo, and Wole Soyinka, were among the writers who "repurposed modernist versions of aesthetic autonomy to declare their freedom from colonial bondage, from systems of racial discrimination, and even from the new postcolonial state".
Relationship with postmodernism
By the early 1980s, the postmodern movement in art and architecture began to establish its position through various conceptual and intermedia formats. Postmodernism in music and literature began to take hold earlier. In music, postmodernism is described in one reference work as a "term introduced in the 1970s", while in British literature, The Oxford Encyclopedia of British Literature sees modernism "ceding its predominance to postmodernism" as early as 1939. However, dates are highly debatable, especially as, according to Andreas Huyssen, "one critic's postmodernism is another critic's modernism." The debate includes those who are critical of the division between the two, who see them as two aspects of the same movement and believe that late modernism continues.
Modernism is an all-encompassing label for a wide variety of cultural movements. Postmodernism is essentially a centralized movement that named itself, based on socio-political theory, although the term is now used in a wider sense to refer to activities from the 20th century onwards which exhibit awareness of and reinterpret the modern.
Postmodern theory asserts that the attempt to canonize modernism "after the fact" is doomed to unresolvable contradictions. And since the crux of postmodernism is a critique of any claim to a single discernible truth, postmodernism and modernism conflict on the existence of truth. Where modernists approach the issue of 'truth' with different theories (correspondence, coherence, pragmatist, semantic, etc.), postmodernists approach it negatively, denying the very existence of an accessible truth.
In a narrower sense, what was modernist was not necessarily also postmodernist. Those elements of modernism which accentuated the benefits of rationality and socio-technological progress were only modernist.
Modernist reactions against postmodernism include remodernism, which rejects the cynicism and deconstruction of postmodern art in favor of reviving early modernist aesthetic currents.
Criticism of late modernity
Although artistic modernism tended to reject capitalist values such as consumerism, 20th-century civil society embraced global mass production and the proliferation of cheap and accessible commodities. This period of social development is known as "late or high modernity" and originated in advanced Western societies. The German sociologist Jürgen Habermas, in The Theory of Communicative Action (1981), developed the first substantive critique of the culture of late modernity. Another important early critique of late modernity is the American sociologist George Ritzer's The McDonaldization of Society (1993). Ritzer describes how late modernity became saturated with fast food consumer culture. Other authors have demonstrated how modernist devices appeared in popular cinema, and later in music videos. Modernist design has entered the mainstream of popular culture, as simplified and stylized forms became popular, often associated with dreams of a space age high-tech future.
In 2008, Janet Bennett published Modernity and Its Critics in The Oxford Handbook of Political Theory. The merging of consumer and high-end versions of modernist culture led to a radical transformation of the meaning of "modernism". First, it implied that a movement based on the rejection of tradition had become a tradition of its own. Second, it demonstrated that the distinction between elite modernist and mass consumerist culture had lost its precision. Modernism had become so institutionalized that it was now "post avant-garde", indicating that it had lost its power as a revolutionary movement. Many have interpreted this transformation as the beginning of the phase that became known as postmodernism. For others, such as art critic Robert Hughes, postmodernism represents an extension of modernism.
"Anti-Modern" or "Counter-Modern" movements seek to emphasize holism, connection and spirituality as remedies or antidotes to modernism. Such movements see modernism as reductionist, and therefore subject to an inability to see systemic and emergent effects.
Some traditionalist artists like Alexander Stoddart reject modernism generally as the product of "an epoch of false money allied with false culture".
In some fields, the effects of modernism have remained stronger and more persistent than in others. Visual art has made the most complete break with its past. Most major capital cities have museums devoted to modern art as distinct from post-Renaissance art. Examples include the Museum of Modern Art in New York, the Tate Modern in London, and the Centre Pompidou in Paris. These galleries make no distinction between modernist and postmodernist phases, seeing both as developments within modern art.
Further reading
Robert Archambeau. "The Avant-Garde in Babel. Two or Three Notes on Four or Five Words", Action-Yes vol. 1, issue 8 Autumn 2008.
Armstrong, Carol and de Zegher, Catherine (eds.), Women Artists as the Millennium, Cambridge, MA: October Books, MIT Press, 2006.
Aspray, William & Philip Kitcher, eds., History and Philosophy of Modern Mathematics, Minnesota Studies in the Philosophy of Science vol. XI, Minneapolis: University of Minnesota Press, 1988
Bäckström, Per (ed.), Centre-Periphery. The Avant-Garde and the Other, Nordlit. University of Tromsø, no. 21, 2007.
Bäckström, Per. "One Earth, Four or Five Words. The Peripheral Concept of 'Avant-Garde'", Action-Yes vol. 1, issue 12 Winter 2010.
Bäckström, Per & Bodil Børset (eds.), Norsk avantgarde (Norwegian Avant-Garde), Oslo: Novus, 2011.
Bäckström, Per & Benedikt Hjartarson (eds.), Decentring the Avant-Garde, Amsterdam & New York: Rodopi, Avantgarde Critical Studies, 2014.
Bäckström, Per and Benedikt Hjartarson. "Rethinking the Topography of the International Avant-Garde", in Decentring the Avant-Garde, Per Bäckström & Benedikt Hjartarson (eds.), Amsterdam & New York: Rodopi, Avantgarde Critical Studies, 2014.
Baker, Houston A. Jr., Modernism and the Harlem Renaissance, Chicago: University of Chicago Press, 1987
Berman, Marshall, All That Is Solid Melts into Air: The Experience of Modernity. Second ed. London: Penguin, 1982.
Bradbury, Malcolm, & James McFarlane (eds.), Modernism: A Guide to European Literature 1890–1930 (Penguin "Penguin Literary Criticism" series, 1978).
Brush, Stephen G., The History of Modern Science: A Guide to the Second Scientific Revolution, 1800–1950, Ames, IA: Iowa State University Press, 1988
Centre Georges Pompidou, Face a l'Histoire, 1933–1996. Flammarion, 1996.
Crouch, Christopher, Modernism in art design and architecture, New York: St. Martin's Press, 2000
Eysteinsson, Astradur, The Concept of Modernism, Ithaca, NY: Cornell University Press, 1992
Friedman, Julia. Beyond Symbolism and Surrealism: Alexei Remizov's Synthetic Art, Northwestern University Press, 2010.
Frascina, Francis, and Charles Harrison (eds.). Modern Art and Modernism: A Critical Anthology. Published in association with The Open University. London: Harper and Row, Ltd. Reprinted, London: Paul Chapman Publishing, Ltd., 1982.
Gates, Henry Louis. The Norton Anthology of African American Literature. W.W. Norton & Company, Inc., 2004.
Hughes, Robert, The Shock of the New: Art and the Century of Change (Gardners Books, 1991).
Kenner, Hugh, The Pound Era (1971), Berkeley, CA: University of California Press, 1973
Kern, Stephen, The Culture of Time and Space, Cambridge, MA: Harvard University Press, 1983
Klein, Jürgen, On Modernism, Berlin, Bruxelles, Lausanne, New York, Oxford: Peter Lang, 2022. ISBN 978-3-631-87869-9.
Kolocotroni, Vassiliki et al., eds., Modernism: An Anthology of Sources and Documents (Edinburgh: Edinburgh University Press, 1998).
Levenson, Michael (ed.), The Cambridge Companion to Modernism (Cambridge University Press, "Cambridge Companions to Literature" series, 1999).
Lewis, Pericles. The Cambridge Introduction to Modernism (Cambridge: Cambridge University Press, 2007).
Nicholls, Peter, Modernisms: A Literary Guide (Hampshire and London: Macmillan, 1995).
Pevsner, Nikolaus, Pioneers of Modern Design: From William Morris to Walter Gropius (New Haven, CT: Yale University Press, 2005).
Pevsner, Nikolaus, The Sources of Modern Architecture and Design (Thames & Hudson, "World of Art" series, 1985).
Pollock, Griselda, Generations and Geographies in the Visual Arts (Routledge, London, 1996).
Pollock, Griselda, and Florence, Penny, Looking Back to the Future: Essays by Griselda Pollock from the 1990s (New York: G&B New Arts Press, 2001).
Sass, Louis A. (1992). Madness and Modernism: Insanity in the Light of Modern Art, Literature, and Thought. New York: Basic Books. Cited in Bauer, Amy (2004). "Cognition, Constraints, and Conceptual Blends in Modernist Music", in The Pleasure of Modernist Music.
Schorske, Carl. Fin-de-Siècle Vienna: Politics and Culture. Vintage, 1980.
Schwartz, Sanford, The Matrix of Modernism: Pound, Eliot, and Early Twentieth Century Thought, Princeton, NJ: Princeton University Press, 1985
Tyler, William J., ed. Modanizumu: Modernist Fiction from Japan, 1913–1938. University of Hawai'i Press, 2008.
Van Loo, Sofie (ed.), Gorge(l). Royal Museum of Fine Arts, Antwerp, 2006.
Weir, David, Decadence and the Making of Modernism, University of Massachusetts Press, 1995.
Weston, Richard, Modernism (Phaidon Press, 2001).
de Zegher, Catherine, Inside the Visible (Cambridge, MA: MIT Press, 1996).
External links
Ballard, J. G., on Modernism.
Denzer, Anthony S., PhD, Masters of Modernism.
Hoppé, E. O., photographer, Edwardian Modernists.
Malady of Writing: Modernism you can dance to, an online radio show that presents a humorous version of Modernism.
Modernism Lab @ Yale University
Modernism/Modernity, official publication of the Modernist Studies Association
Modernism vs. Postmodernism
Context collapse
Context collapse, or "the flattening of multiple audiences into a single context", is a term arising out of the study of human interaction on the internet, especially within social media. Context collapse "generally occurs when a surfeit of different audiences occupy the same space, and a piece of information intended for one audience finds its way to another", with the new audience's reaction often being uncharitable and highly negative because it fails to understand the original context.
History
The term grew out of the work of Erving Goffman and Joshua Meyrowitz. In his book No Sense of Place (1985), Meyrowitz first applied the concept to media like television and radio. He claimed that these technologies broke barriers between different kinds of audiences because the content being produced was broadcast widely. In The Presentation of Self in Everyday Life, Goffman argues that individuals practice "audience segregation", making sure to separate one audience, to whom they perform one role, from other audiences, to whom they play a different role. Context collapse arises out of the failure to do so. This is partly because of the inclination to imply during an interaction that one's current performance is one's most important role performance (an impression that would collapse if the different audiences to whom one performs differently were integrated), and that one's relationship and role performance to a given audience are unique.
Michael Wesch used the term context collapse in his 2008 lecture "An Anthropological Introduction to YouTube." The term was first used in print by danah boyd, Alice Marwick, and Wesch. boyd is credited with coining the term "collapsed contexts" in the early 2000s in reference to social media sites like Myspace and Friendster.
In social media
The concept of context collapse has become much more prominent with the rise of social media because many of these platforms, like Twitter, restrict users from specifically identifying and determining their audience. On Twitter, context collapse is seen with the retweeting functionality: when a public user posts a 'tweet', it can be retweeted by anyone, thus introducing the content to a new audience. To avoid unwanted attention, some users resort to the 'lowest common denominator' approach, posting only content they know would be appropriate for all of their audience members.
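As a rough illustration, the audience-flattening just described and the 'lowest common denominator' strategy can be modeled with simple set operations. The sketch below is a hypothetical abstraction for exposition only; the user names, follower groups, and function names are invented and do not correspond to any platform's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical model for exposition: followers are tagged by social context,
# and a retweet merges the author's audience with each retweeter's followers,
# "flattening" several audiences into one (context collapse).

@dataclass
class User:
    name: str
    followers: set = field(default_factory=set)  # social contexts this user reaches

def audience_after_retweets(author, retweeters):
    """Return all social contexts a post reaches once it has been retweeted."""
    audience = set(author.followers)
    for retweeter in retweeters:
        audience |= retweeter.followers  # each retweet adds a new audience
    return audience

def lowest_common_denominator(acceptable_by_context):
    """Return the topics acceptable to every context at once."""
    safe = None
    for topics in acceptable_by_context.values():
        safe = set(topics) if safe is None else safe & topics
    return safe or set()

if __name__ == "__main__":
    alice = User("alice", {"coworkers"})
    bob = User("bob", {"family", "college friends"})

    # A post Alice intended for coworkers now reaches Bob's contexts too.
    print(audience_after_retweets(alice, [bob]))
    # -> {'coworkers', 'family', 'college friends'}

    # Only topics acceptable in every context survive the collapse.
    print(lowest_common_denominator({
        "coworkers": {"industry news", "weather"},
        "family": {"holiday plans", "weather"},
    }))
    # -> {'weather'}
```

In this toy model, each retweet enlarges the audience by set union while the set of 'safe' topics shrinks by set intersection, which is precisely the trade-off the lowest common denominator approach describes.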
Types of context collapse
As defined by linguist Jenny L. Davis and sociologist Nathan Jurgenson, there are two main types of context collapse: context collusions and context collisions. Context collusions are considered to be intentional while context collisions are considered to be unintentional.
An example of context collusion offline may be a wedding where different social circles are purposefully combined. Online, context collusion is seen on social media sites like Facebook where one may create a post to garner attention from various social groups.
Context collision occurs, for example, when someone makes a joke about another person without realizing that person is also listening. On the web, an example of context collision is when companies accidentally make private information about their users available.
See also
Poe's law
Contextual Integrity
Classroom management
Classroom management is the process teachers use to ensure that classroom lessons run smoothly without disruptive behavior from students compromising the delivery of instruction. It includes preventing disruptive behavior preemptively as well as responding to it effectively after it happens. Such disruptions may range from normal peer conflict to more severe disturbances of the classroom's social dynamics, such as bullying among students, which make it impossible for the affected students to concentrate on their schoolwork and result in a significant deterioration of their school performance.
It is a difficult aspect of teaching for many teachers, and problems in this area cause some to leave teaching. In 1981, the US National Education Association reported that 36% of teachers said they would probably not go into teaching if they had to decide again; a major reason was negative student attitudes and discipline.
Classroom management is crucial in classrooms because it supports the proper execution of curriculum development, the development of best teaching practices, and putting them into action. Classroom management can be described as the actions and directions that teachers use to create a successful learning environment, one that has a positive impact on students achieving given learning requirements and goals. To ensure all students receive the best education, it would seem beneficial for educator programs to spend more time and effort ensuring that educators and instructors are well versed in classroom management.
Teachers often do not focus on learning classroom management because higher education programs do not put an emphasis on it; the focus is instead on creating a conducive learning atmosphere for the students. Classroom management tools give teachers the resources to properly and successfully educate upcoming generations and to ensure future successes as a nation. According to Moskowitz & Hayman (1976), once a teacher loses control of their classroom, it becomes increasingly more difficult for them to regain that control.
Also, research from Berliner (1988) and Brophy & Good (1986) shows that the time a teacher must take to correct misbehavior caused by poor classroom management skills results in a lower rate of academic engagement in the classroom. From the student's perspective, effective classroom management involves clear communication of behavioral and academic expectations as well as a cooperative learning environment.
Techniques
Corporal punishment
Until recently, corporal punishment was widely used as a means of controlling disruptive behavior but it is now illegal in most schools. It is still advocated in some contexts by religious leaders such as James Dobson, but his views "diverge sharply from those recommended by contemporary mainstream experts" and are not based on empirical testing, but rather are a reflection of his faith-based beliefs.
According to studies, physical punishments like spanking, and classroom procedures used in parts of Asia such as making students stand, do not make students or children more aggressive. Consistency seems to play a greater role in whether outcomes are negative.
Corporal punishment is now banned in most schools in the United States and in most developed countries. Although its effectiveness was never proven, the punishment was applied very disproportionately: African American males were the most punished group. In a study conducted in 2006, 17.1 percent of students who experienced corporal punishment were African American, and 78.3 percent were male.
Good teacher-student relationships
Some characteristics of good teacher-student relationships in the classroom involve the appropriate levels of dominance, cooperation, professionalism, and awareness of high-needs students. Dominance is defined as the teacher's ability to give clear purpose and guidance concerning student behavior and academics. Creating clear expectations and consequences for student behavior builds effective relationships. Such expectations may cover classroom etiquette and behavior, group work, seating arrangements, the use of equipment and materials, and classroom disruptions. These expectations should always be enforced consistently among all students within the class. Inconsistency is viewed by students as unfair and will result in the students having less respect for the teacher. Assertive teacher behavior also ensures that thoughts and messages are passed on to the student effectively. Assertive behavior can be achieved by using erect posture, a tone of voice appropriate to the current situation, and taking care not to ignore inappropriate behavior. Another strategy for building a good teacher-student relationship is using inclusive pronouns. For example, if a class is misbehaving and getting off track, instead of saying "you need to get back to work" a teacher may say "we've got a lot of work to do today, so let's get back to it." Another technique for establishing good teacher-student relationships is William Purkey's "three pluses and a wish." The pluses are compliments that the teacher gives to the student before making a request, helping the student get into a mindset more likely to cooperate with the teacher. An example might look like this: "Thanks so much for your participation in class today. I love hearing your comments. I think you provided a fair amount of educational insight to the discussion. I would appreciate it if you could raise your hand before commenting, so that other students can follow your example."
Preventive techniques
Preventive approaches to classroom management involve creating a positive classroom community with mutual respect between teacher and student. Teachers using the preventive approach offer warmth, acceptance, and support unconditionally – not based on a student's behavior. Fair rules and consequences are established and students are given frequent and consistent feedback regarding their behavior. One way to establish this kind of classroom environment is through the development and use of a classroom contract. The contract should be created by both students and the teacher. In the contract, students and teachers decide and agree on how to treat one another in the classroom. The group also decides on and agrees to what the group will do if someone violates the contract. Rather than a consequence, the group should decide how to fix the problem through either class discussion, peer mediation, counseling, or by one-on-one conversations leading to a solution to the situation.
Preventive techniques also involve the strategic use of praise and rewards to inform students about their behavior rather than as a means of controlling student behavior. To use rewards to inform students about their behavior, teachers must emphasize the value of the behavior that is rewarded and also explain to students the specific skills they demonstrated to earn the reward. Teachers should also encourage student collaboration in selecting rewards and defining appropriate behaviors that earn rewards. This form of praise and positive reinforcement is very effective in helping students understand expectations and builds a student's self-concept.
An often-overlooked preventive technique is over-planning. Students tend to fill awkward pauses or silences in the class. When teachers over-plan, they have plenty of material and activities to fill the class time, reducing the opportunities students have to misbehave. Transition time can also be an opportunity for students to be disruptive; to minimize this, transitions should take less than 30 seconds. Both the teacher and the students must be prepared and organized for a day of learning. An organizational routine should be implemented at the beginning of the year and reinforced daily until it is instinctive.
The Blue vs Orange Card Theory
The blue card vs orange card theory was introduced by William Purkey, which suggests that students need supportive, encouraging statements to feel valuable, able, and responsible. "Many messages are soothing, encouraging and supportive. These messages are 'blue cards' - they encourage a positive self-concept. Other messages are critical, discouraging, demeaning. These cards are 'orange' – the international color of distress". The goal is to fill the students' 'file box' with more 'blue cards' than 'orange cards' to help with students' perspective of learning.
High cards and low cards
High cards and low cards are an intervention technique created by William Purkey that gives the teacher a graduated level of management as needed. Low cards are less invasive interventions to address what is happening; some examples are raising the eyebrows, staring politely at the student, moving closer to the student while continuing to talk, and calling the student by name and asking if they are listening. High cards are stronger interventions; some examples include sending the student to the principal's office, keeping the student after school, and calling home.
Systematic approaches
Assertive discipline
Assertive discipline is an approach designed to assist educators in running a teacher-in-charge classroom environment. Assertive teachers react confidently to situations that require the management of student behavior. They do not use an abrasive, sarcastic, or hostile tone when disciplining students.
Assertive discipline is one of the most widely used classroom management tactics in the world. It demands student compliance and requires teachers to be firm. This method draws a clear line between aggressive discipline and assertive discipline. The standards and rules set in place by assertive discipline are supported by positive reinforcement as well as negative consequences. Teachers using this approach carry themselves confidently and have no tolerance for class disruption. They are not timid, and remain consistent and just.
Constructivist discipline
A constructivist, student-centered approach to classroom management is based on the assignment of tasks in response to student disruption that are "(1) easy for the student to perform, (2) developmentally enriching, (3) progressive, so a teacher can up the ante if needed, (4) based on students' interests, (5) designed to allow the teacher to stay in charge, and (6) foster creativity and play in the classroom." Compliance rests on assigning disciplinary tasks that the student will want to do, in concert with the teacher rapidly assigning more of the task if the student does not initially comply. Once the student complies, the role of the teacher as the person in charge (i.e. in loco parentis) has been re-established peacefully, creatively, and with respect for students' needs. Claimed benefits include increased student trust and long-term emotional benefits from the modeling of creative solutions to difficulties without resorting to a threat of violence or force.
Culturally responsive classroom management
Culturally responsive classroom management (CRCM) is an approach to running classrooms with all children (not simply racial/ethnic minority children) in a culturally responsive way. More than a set of strategies or practices, CRCM is a pedagogical approach that guides the management decisions that teachers make. It is a natural extension of culturally responsive teaching, which uses students' backgrounds, rendering of social experiences, prior knowledge, and learning styles in daily lessons. Teachers, as culturally responsive classroom managers, recognize their biases and values and reflect on how these influence their expectations for behavior and their interactions with students, as well as what learning looks like. There is extensive research on traditional classroom management and a myriad of resources available on how to deal with behavior issues. Conversely, there is little research on CRCM, despite the fact that teachers who lack cultural competence often experience problems in this area.
Discipline without Stress, Punishments or Rewards
Discipline without Stress (or DWS) is a K-12 discipline and learning approach developed by Marvin Marshall and described in his 2001 book, Discipline without Stress, Punishments or Rewards. The approach is designed to educate young people about the value of internal motivation. The intention is to prompt and develop within youth a desire to become responsible and self-disciplined and to put forth effort to learn. The most significant characteristics of DWS are that it is totally noncoercive (but not permissive) and takes the opposite approach to Skinnerian behaviorism that relies on external sources for reinforcement. According to Marvin Marshall's book, there are three principles to practice. The first principle is 'Positivity', where he explains that "Teachers [should be] practic[ing] changing negatives into positives. "No running" becomes "We walk in the hallways." "Stop talking" becomes "This is quiet time." The second principle as described by Marvin Marshall is 'Choice', and he says, "Choice-response thinking is taught—as well as impulse control—so students are not victims of their own impulses." The third principle is 'Reflection', "[because] a person can only control another person temporarily and because no one can actually change another person, asking REFLECTIVE questions is the most effective approach for actuating change in others."
Provide flexible learning goals
Instructors can demonstrate a suitable level of dominance by giving clear learning objectives; they can also convey appropriate levels of cooperation by giving learning objectives that can be adjusted based on the class's needs. Allowing students to participate in setting their own learning goals and outcomes at the start of a unit brings a sense of cooperation and mutual understanding between instructor and student. One way of involving the students, and in turn making them feel heard in the decision-making of the class, is to ask which topics they would find most intriguing to learn about, based on a guided rubric. This approach engages students and signals that the teacher is interested in their interests, which in turn yields greater learning outcomes as well as mutual respect. Posting learning objectives where the students can see and refer to them is vital to carrying out the objectives. Learning goals should be clear, not a mystery: students who do not know what the teacher wants them to do are unlikely to learn the material and understand what is being taught. When the teacher clearly knows the goal, the lesson will progress more smoothly, and the teacher can work every student toward that central goal.
The Good Behavior Game
The Good Behavior Game (GBG) is a "classroom-level approach to behavior management" that was originally used in 1969 by Barrish, Saunders, and Wolf. The Game entails the class earning access to a reward or losing a reward, provided that all members of the class engage in some type of behavior (or do not exceed a certain amount of undesired behavior). The GBG can be used to increase desired behaviors (e.g., question asking) or to decrease undesired behaviors (e.g., out-of-seat behavior). The GBG has been used with preschoolers as well as adolescents; however, most applications have been with typically developing students (i.e., those without developmental disabilities). In addition, the Game "is usually popular with and acceptable to students and teachers."
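The group contingency at the heart of the GBG can be stated compactly in code. The sketch below is only an illustration of the interdependent group contingency described above, not a published protocol; the team names, the infraction threshold, and the reporting logic are hypothetical choices made for the example.

```python
# Minimal sketch of the Good Behavior Game's group contingency:
# every team whose count of undesired behaviors stays at or below a
# threshold earns access to the reward. All names and numbers here
# are illustrative assumptions, not part of the published procedure.

THRESHOLD = 4  # hypothetical maximum number of infractions per session

def winning_teams(infractions_by_team):
    """Return the teams that meet the criterion for this session."""
    return [team for team, count in infractions_by_team.items()
            if count <= THRESHOLD]

session = {"Team A": 2, "Team B": 6, "Team C": 4}
for team in winning_teams(session):
    print(f"{team} earns access to the reward.")  # Team A and Team C
```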
Positive classrooms
Robert DiGiulio has developed what he calls "positive classrooms". DiGiulio sees positive classroom management as the result of four factors: how teachers regard their students (spiritual dimension), how they set up the classroom environment (physical dimension), how skillfully they teach content (instructional dimension), and how well they address student behavior (managerial dimension). In positive classrooms, student participation and collaboration are encouraged in a safe environment that has been created. A positive classroom environment can be encouraged by being consistent with expectations, using students' names, providing choices when possible, and having an overall trust in students. As educators, teachers have daily opportunities to help students grow in confidence and feel good about themselves, despite any negativity that may surround them in their households. Actions such as boosting students' self-esteem through praise, helping them work through feelings of alienation, depression, and anger, and helping them realize and honor their intrinsic worth as human beings may result in better behavior in the long run.
Praise in the classroom
Using behavior-specific praise (BSP) in the classroom can have many positive effects on the students and classroom management. BSP is when the teacher praises the student for the exact behavior that the student is exhibiting. For example, the student might normally have trouble staying in their seat, which causes disruption in the classroom. When the student stays in their seat, the teacher might say that they are proud of the student for this behavior. This would help the student feel validated for a positive behavior and would increase the likelihood of the positive behavior happening again.
As a process
In the Handbook of Classroom Management: Research Practice and Contemporary Issues (2006), Evertson and Weinstein characterize classroom management as the actions taken to create an environment that supports and facilitates academic and social–emotional learning. Toward this goal, teachers must:
develop caring, supportive relationships with and among students
organize and implement instruction in ways that optimize students' access to learning
use group management methods that encourage students' engagement in academic tasks
promote the development of students' social skills and self–regulation
use appropriate interventions to assist students with behavior problems.
As time management
In their introductory text on teaching, Kauchak and Eggen (2008) explain classroom management in terms of time management. The goal of classroom management, to Kauchak and Eggen, is to not only maintain order but to optimize student learning. They divide class time into four overlapping categories, namely allocated time, instructional time, engaged time, and academic learning time.
Academic learning time
Academic learning time occurs when students participate actively and are successful in learning activities. Effective classroom management maximizes academic learning time.
Allocated time
Allocated time is the total time allotted for teaching, learning, routine classroom procedures, checking attendance, and posting or delivering announcements.
Allocated time is also what appears on each student's schedule, for example "Introductory Algebra: 9:50–10:30 a.m." or "Fine Arts 1:15–2:00 p.m."
Engaged time
Engaged time is also called time on task. During engaged time, students are participating actively in learning activities—asking and responding to questions, completing worksheets and exercises, preparing skits and presentations, etc. This is an important part of the school day because when students are engaged (actively) they are learning.
Instructional time
Instructional time is what remains after routine classroom procedures are completed. That is to say, instructional time is the time wherein teaching and learning actually takes place. Teachers may spend two or three minutes taking attendance, for example, before their instruction begins. The time it takes for the teacher to do routine tasks can severely limit classroom instruction. Teachers must get a handle on classroom management to be effective.
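As a back-of-the-envelope illustration of how these categories relate, the sketch below assumes the common reading that academic learning time is contained in engaged time, which is contained in instructional time, which is contained in allocated time; all of the minute values are invented for the example.

```python
# Hypothetical 40-minute class period illustrating Kauchak and Eggen's
# four time categories. Every number here is an invented example value.

allocated = 40                       # total scheduled time, e.g. 9:50-10:30
routine = 5                          # attendance, announcements, etc.
instructional = allocated - routine  # time left for actual teaching
engaged = 28                         # minutes students were on task
academic_learning = 22               # engaged minutes spent succeeding

# The categories nest under the assumption stated above.
assert academic_learning <= engaged <= instructional <= allocated

print(f"Instructional time: {instructional} minutes")
print(f"Successful learning as a share of allocated time: "
      f"{academic_learning / allocated:.0%}")
```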
Common mistakes
In an effort to maintain order in the classroom, teachers can be mindful of the specific methods of classroom management that they apply with their particular group of students and consider how they will respond when certain strategies are implemented in the classroom. Teachers can consider the ways in which each strategy is able to be best integrated into their instruction in order to avoid potential conflicts or negative student responses. A common error made by teachers is to define the problem behavior by how it looks without considering its function. By considering how students might respond to specific methods of classroom management, teachers can plan which strategies will be the most successful when used with their particular students.
Interventions are more likely to be effective when they are individualized to address the specific function of the problem behavior. Two students with similar-looking misbehavior may require entirely different intervention strategies if the behaviors are serving different functions. Teachers must be able to change their methods from year to year as the children change; not every approach works for every child, so teachers need to learn to be flexible. Another common mistake is for the teacher to become increasingly frustrated and negative when an approach is not working.
The teacher may raise his or her voice or increase adverse consequences in an effort to make the approach work. This type of interaction may impair the teacher-student relationship. Instead of allowing this to happen, it is often better to simply try a new approach.
Inconsistency in expectations and consequences is an additional mistake that can lead to dysfunction in the classroom. Teachers must be consistent in their expectations and consequences to help ensure that students understand that rules will be enforced. To avoid this, teachers should communicate expectations to students clearly and be sufficiently committed to the classroom management procedures to enforce them consistently.
"Ignoring and approving" is an effective classroom management strategy. This involves ignoring students when they behave undesirably and approving their behavior when it is desirable. When students are praised for their good behavior but ignored for their bad behavior, this may increase the frequency of good behavior and decrease bad behavior. Student behavior may be maintained by attention; if students have a history of getting attention after misbehavior, they may continue this behavior as long as it continues to get attention. If student misbehavior is ignored, but good behavior results in attention, students may instead behave appropriately to acquire attention. There are however also studies showing that ignoring problematic student behavior, such as bullying other students, can be perceived as tacit approval by the perpetrators and might exacerbate their behavior.
See also
Behavior management
Behavioral engineering
Child development
Delaney card
Educational psychology
References
External links
Contemporary Educational Psychology/Chapter 7: Classroom Management and the Learning Environment
Classroom management
Enactivism
Enactivism is a position in cognitive science that argues that cognition arises through a dynamic interaction between an acting organism and its environment. It claims that the environment of an organism is brought about, or enacted, by the active exercise of that organism's sensorimotor processes. "The key point, then, is that the species brings forth and specifies its own domain of problems ...this domain does not exist "out there" in an environment that acts as a landing pad for organisms that somehow drop or parachute into the world. Instead, living beings and their environments stand in relation to each other through mutual specification or codetermination" (p. 198). "Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems...participate in the generation of meaning ...engaging in transformational and not merely informational interactions: they enact a world." These authors suggest that the increasing emphasis upon enactive terminology presages a new era in thinking about cognitive science. How the actions involved in enactivism relate to age-old questions about free will remains a topic of active debate.
The term 'enactivism' is close in meaning to 'enaction', defined as "the manner in which a subject of perception creatively matches its actions to the requirements of its situation". The introduction of the term enaction in this context is attributed to Francisco Varela, Evan Thompson, and Eleanor Rosch in The Embodied Mind (1991), who proposed the name to "emphasize the growing conviction that cognition is not the representation of a pre-given world by a pre-given mind but is rather the enactment of a world and a mind on the basis of a history of the variety of actions that a being in the world performs". This was further developed by Thompson and others, to place emphasis upon the idea that experience of the world is a result of mutual interaction between the sensorimotor capacities of the organism and its environment. However, some writers maintain that there remains a need for some degree of the mediating function of representation in this new approach to the science of the mind.
The initial emphasis of enactivism upon sensorimotor skills has been criticized as "cognitively marginal", but it has been extended to apply to higher level cognitive activities, such as social interactions. "In the enactive view,... knowledge is constructed: it is constructed by an agent through its sensorimotor interactions with its environment, co-constructed between and within living species through their meaningful interaction with each other. In its most abstract form, knowledge is co-constructed between human individuals in socio-linguistic interactions...Science is a particular form of social knowledge construction...[that] allows us to perceive and predict events beyond our immediate cognitive grasp...and also to construct further, even more powerful scientific knowledge."
Enactivism is closely related to situated cognition and embodied cognition, and is presented as an alternative to cognitivism, computationalism, and Cartesian dualism.
Philosophical aspects
Enactivism is one of a cluster of related theories sometimes known as the 4Es. As described by Mark Rowlands, mental processes are:
Embodied involving more than the brain, including a more general involvement of bodily structures and processes.
Embedded functioning only in a related external environment.
Enacted involving not only neural processes, but also things an organism does.
Extended into the organism's environment.
Enactivism proposes an alternative to dualism as a philosophy of mind, in that it emphasises the interactions between mind, body and the environment, seeing them all as inseparably intertwined in mental processes. The self arises as part of the process of an embodied entity interacting with the environment in precise ways determined by its physiology. In this sense, individuals can be seen to "grow into" or arise from their interactive role with the world.
"Enaction is the idea that organisms create their own experience through their actions. Organisms are not passive receivers of input from the environment, but are actors in the environment such that what they experience is shaped by how they act."
In The Tree of Knowledge, Maturana & Varela proposed the term enactive "to evoke the view of knowledge that what is known is brought forth, in contraposition to the more classical views of either cognitivism or connectionism". They see enactivism as providing a middle ground between the two extremes of representationalism and solipsism, seeking to "confront the problem of understanding how our existence – the praxis of our living – is coupled to a surrounding world which appears filled with regularities that are at every instant the result of our biological and social histories.... to find a via media: to understand the regularity of the world we are experiencing at every moment, but without any point of reference independent of ourselves that would give certainty to our descriptions and cognitive assertions. Indeed the whole mechanism of generating ourselves, as describers and observers tells us that our world, as the world which we bring forth in our coexistence with others, will always have precisely that mixture of regularity and mutability, that combination of solidity and shifting sand, so typical of human experience when we look at it up close." [Tree of Knowledge, p. 241] Another important notion relating to enactivism is autopoiesis, which refers to a system that is able to reproduce and maintain itself. Maturana & Varela write that "This was a word without a history, a word that could directly mean what takes place in the dynamics of the autonomy proper to living systems". Using the term autopoiesis, they argue that any closed system that has autonomy, self-reference and self-construction (or, that has autopoietic activities) has cognitive capacities, and therefore that cognition is present in all living systems. This view is also called autopoietic enactivism.
Radical enactivism is another form of enactivist view of cognition. Radical enactivists often adopt a thoroughly non-representational, enactive account of basic cognition. Basic cognitive capacities mentioned by Hutto and Myin include perceiving, imagining and remembering. They argue that those forms of basic cognition can be explained without positing mental representations. With regard to complex forms of cognition such as language, they think mental representations are needed, because explanations of content are needed there. In humans' public practices, they claim that "such intersubjective practices and sensitivity to the relevant norms comes with the mastery of the use of public symbol systems" (2017, p. 120), and so "as it happens, this appears only to have occurred in full form with construction of sociocultural cognitive niches in the human lineage" (2017, p. 134). They conclude that basic cognition, as well as cognition in simple organisms such as bacteria, is best characterized as non-representational.
Enactivism also addresses the hard problem of consciousness, referred to by Thompson as part of the explanatory gap in explaining how consciousness and subjective experience are related to brain and body. "The problem with the dualistic concepts of consciousness and life in standard formulations of the hard problem is that they exclude each other by construction". Instead, according to Thompson's view of enactivism, the study of consciousness or phenomenology as exemplified by Husserl and Merleau-Ponty is to complement science and its objectification of the world. "The whole universe of science is built upon the world as directly experienced, and if we want to subject science itself to rigorous scrutiny and arrive at a precise assessment of its meaning and scope, we must begin by reawakening the basic experience of the world of which science is the second-order expression" (Merleau-Ponty, The phenomenology of perception as quoted by Thompson, p. 165). In this interpretation, enactivism asserts that science is formed or enacted as part of humankind's interactivity with its world, and by embracing phenomenology "science itself is properly situated in relation to the rest of human life and is thereby secured on a sounder footing."
Enaction has been seen as a move to conjoin representationalism with phenomenalism, that is, as adopting a constructivist epistemology, an epistemology centered upon the active participation of the subject in constructing reality. However, 'constructivism' focuses upon more than a simple 'interactivity' that could be described as a minor adjustment to 'assimilate' reality or 'accommodate' to it. Constructivism looks upon interactivity as a radical, creative, revisionist process in which the knower constructs a personal 'knowledge system' based upon their experience and tested by its viability in practical encounters with their environment. Learning is a result of perceived anomalies that produce dissatisfaction with existing conceptions.
Shaun Gallagher also points out that pragmatism is a forerunner of enactive and extended approaches to cognition. According to him, enactive conceptions of cognition can be found in many pragmatists such as Charles Sanders Peirce and John Dewey. For example, Dewey says that "The brain is essentially an organ for effecting the reciprocal adjustment to each other of the stimuli received from the environment and responses directed upon it" (1916, pp. 336–337). This view is fully consistent with enactivist arguments that cognition is not just a matter of brain processes and brain is one part of the body consisting of the dynamical regulation. Robert Brandom, a neo-pragmatist, comments that "A founding idea of pragmatism is that the most fundamental kind of intentionality (in the sense of directedness towards objects) is the practical involvement with objects exhibited by a sentient creature dealing skillfully with its world" (2008, p. 178).
How does constructivism relate to enactivism? From the above remarks it can be seen that Glasersfeld expresses an interactivity between the knower and the known quite acceptable to an enactivist, but does not emphasize the structured probing of the environment by the knower that leads to the "perturbation relative to some expected result" that then leads to a new understanding. It is this probing activity, especially where it is not accidental but deliberate, that characterizes enaction, and invokes affect, that is, the motivation and planning that lead to doing and to fashioning the probing, both observing and modifying the environment, so that "perceptions and nature condition one another through generating one another." The questioning nature of this probing activity is not an emphasis of Piaget and Glasersfeld.
Sharing enactivism's stress upon both action and embodiment in the incorporation of knowledge, but giving Glasersfeld's mechanism of viability an evolutionary emphasis, is evolutionary epistemology. Inasmuch as an organism must reflect its environment well enough for the organism to be able to survive in it, and to be competitive enough to be able to reproduce at sustainable rate, the structure and reflexes of the organism itself embody knowledge of its environment. This biology-inspired theory of the growth of knowledge is closely tied to universal Darwinism, and is associated with evolutionary epistemologists such as Karl Popper, Donald T. Campbell, Peter Munz, and Gary Cziko. According to Munz, "an organism is an embodied theory about its environment... Embodied theories are also no longer expressed in language, but in anatomical structures or reflex responses, etc."
One objection to enactive approaches to cognition is the so-called "scale-up objection". According to this objection, enactive theories only have limited value because they cannot "scale up" to explain more complex cognitive capacities like human thoughts. Those phenomena are extremely difficult to explain without positing representation. But recently, some philosophers are trying to respond to such objection. For example, Adrian Downey (2020) provides a non-representational account of Obsessive-compulsive disorder, and then argues that ecological-enactive approaches can respond to the "scaling up" objection.
Psychological aspects
McGann & others argue that enactivism attempts to mediate between the explanatory role of the coupling between cognitive agent and environment and the traditional emphasis on brain mechanisms found in neuroscience and psychology. In the interactive approach to social cognition developed by De Jaegher & others, the dynamics of interactive processes are seen to play significant roles in coordinating interpersonal understanding, processes that in part include what they call participatory sense-making. Recent developments of enactivism in the area of social neuroscience involve the proposal of The Interactive Brain Hypothesis where social cognition brain mechanisms, even those used in non-interactive situations, are proposed to have interactive origins.
Enactive views of perception
In the enactive view, perception "is not conceived as the transmission of information but more as an exploration of the world by various means. Cognition is not tied into the workings of an 'inner mind', some cognitive core, but occurs in directed interaction between the body and the world it inhabits."
Alva Noë, in advocating an enactive view of perception, sought to resolve how we perceive three-dimensional objects on the basis of two-dimensional input. He argues that we perceive this solidity (or 'volumetricity') by appealing to patterns of sensorimotor expectations. These arise from our agent-active 'movements and interaction' with objects, or 'object-active' changes in the object itself. The solidity is perceived through our expectations and skills in knowing how the object's appearance would change with changes in how we relate to it. He saw all perception as an active exploration of the world, rather than being a passive process, something which happens to us.
Noë's idea of the role of 'expectations' in three-dimensional perception has been opposed by several philosophers, notably by Andy Clark. Clark points to difficulties of the enactive approach. He points to internal processing of visual signals, for example, in the ventral and dorsal pathways, the two-streams hypothesis. This results in an integrated perception of objects (their recognition and location, respectively) yet this processing cannot be described as an action or actions. In a more general criticism, Clark suggests that perception is not a matter of expectations about sensorimotor mechanisms guiding perception. Rather, although the limitations of sensorimotor mechanisms constrain perception, this sensorimotor activity is drastically filtered to fit current needs and purposes of the organism, and it is these imposed 'expectations' that govern perception, filtering for the 'relevant' details of sensorimotor input (called "sensorimotor summarizing").
These sensorimotor-centered and purpose-centered views appear to agree on the general scheme but disagree on the question of dominance – whether the dominant component is peripheral or central. Another view, that of closed-loop perception, assigns equal a priori dominance to the peripheral and central components. In closed-loop perception, perception emerges through the process of inclusion of an item in a motor-sensory-motor loop, i.e., a loop (or loops) connecting the peripheral and central components that are relevant to that item. The item can be a body part (in which case the loops are in steady state) or an external object (in which case the loops are perturbed and gradually converge to a steady state). These enactive loops are always active, switching dominance as the need arises.
Another application of enaction to perception is analysis of the human hand. The many remarkably demanding uses of the hand are not learned by instruction, but through a history of engagements that lead to the acquisition of skills. According to one interpretation, it is suggested that "the hand [is]...an organ of cognition", not a faithful subordinate working under top-down instruction, but a partner in a "bi-directional interplay between manual and brain activity." According to Daniel Hutto: "Enactivists are concerned to defend the view that our most elementary ways of engaging with the world and others - including our basic forms of perception and perceptual experience - are mindful in the sense of being phenomenally charged and intentionally directed, despite being non-representational and content-free." Hutto calls this position 'REC' (Radical Enactive Cognition): "According to REC, there is no way to distinguish neural activity that is imagined to be genuinely content involving (and thus truly mental, truly cognitive) from other non-neural activity that merely plays a supporting or enabling role in making mind and cognition possible."
Participatory sense-making
Hanne De Jaegher and Ezequiel Di Paolo (2007) have extended the enactive concept of sense-making into the social domain. The idea takes as its departure point the process of interaction between individuals in a social encounter. De Jaegher and Di Paolo argue that the interaction process itself can take on a form of autonomy (operationally defined). This allows them to define social cognition as the generation of meaning and its transformation through interacting individuals.
The notion of participatory sense-making has led to the proposal that interaction processes can sometimes play constitutive roles in social cognition (De Jaegher, Di Paolo, Gallagher, 2010). It has been applied to research in social neuroscience and autism.
In a similar vein, "an inter-enactive approach to agency holds that the behavior of agents in a social situation unfolds not only according to their individual abilities and goals, but also according to the conditions and constraints imposed by the autonomous dynamics of the interaction process itself". According to Torrance, enactivism involves five interlocking themes related to the question "What is it to be a (cognizing, conscious) agent?" It is:
1. to be a biologically autonomous (autopoietic) organism
2. to generate significance or meaning, rather than to act via...updated internal representations of the external world
3. to engage in sense-making via dynamic coupling with the environment
4. to 'enact' or 'bring forth' a world of significances by mutual co-determination of the organism with its enacted world
5. to arrive at an experiential awareness via lived embodiment in the world.
Torrance adds that "many kinds of agency, in particular the agency of human beings, cannot be understood separately from understanding the nature of the interaction that occurs between agents." That view introduces the social applications of enactivism. "Social cognition is regarded as the result of a special form of action, namely social interaction...the enactive approach looks at the circular dynamic within a dyad of embodied agents."
In cultural psychology, enactivism is seen as a way to uncover cultural influences upon feeling, thinking and acting. Baerveldt and Verheggen argue that "It appears that seemingly natural experience is thoroughly intertwined with sociocultural realities." They suggest that the social patterning of experience is to be understood through enactivism, "the idea that the reality we have in common, and in which we find ourselves, is neither a world that exists independently from us, nor a socially shared way of representing such a pregiven world, but a world itself brought forth by our ways of communicating and our joint action....The world we inhabit is manufactured of 'meaning' rather than 'information'".
Luhmann attempted to apply Maturana and Varela's notion of autopoiesis to social systems. "A core concept of social systems theory is derived from biological systems theory: the concept of autopoiesis. Chilean biologist Humberto Maturana came up with the concept to explain how biological systems such as cells are a product of their own production." "Systems exist by way of operational closure and this means that they each construct themselves and their own realities."
Educational aspects
The first definition of enaction was introduced by psychologist Jerome Bruner, who presented enaction as 'learning by doing' in his discussion of how children learn, and how they can best be helped to learn. He associated enaction with two other ways of organizing knowledge: iconic and symbolic.
"Any domain of knowledge (or any problem within that domain of knowledge) can be represented in three ways: by a set of actions appropriate for achieving a certain result (enactive representation); by a set of summary images or graphics that stand for a concept without defining it fully (iconic representation); and by a set of symbolic or logical propositions drawn from a symbolic system that is governed by rules or laws for forming and transforming propositions (symbolic representation)"
The term 'enactive framework' was elaborated upon by Francisco Varela and Humberto Maturana.
Sriraman argues that enactivism provides "a rich and powerful explanatory theory for learning and being", and that it is closely related both to the ideas of cognitive development of Piaget and to the social constructivism of Vygotsky. Piaget focused on the child's immediate environment, and suggested cognitive structures like spatial perception emerge as a result of the child's interaction with the world. According to Piaget, children construct knowledge, using what they know in new ways and testing it, and the environment provides feedback concerning the adequacy of their construction. In a cultural context, Vygotsky suggested that the kind of cognition that can take place is not dictated by the engagement of the isolated child, but is also a function of social interaction and dialogue that is contingent upon a sociohistorical context. Enactivism in educational theory "looks at each learning situation as a complex system consisting of teacher, learner, and context, all of which frame and co-create the learning situation." Enactivism in education is very closely related to situated cognition, which holds that "knowledge is situated, being in part a product of the activity, context, and culture in which it is developed and used." This approach challenges the "separating of what is learned from how it is learned and used."
Artificial intelligence aspects
The ideas of enactivism regarding how organisms engage with their environment have interested those involved in robotics and man-machine interfaces. The analogy is drawn that a robot can be designed to interact with and learn from its environment in a manner similar to the way an organism does, and that a human can interact with a computer-aided design tool or database using an interface that creates an enactive environment for the user: all the user's tactile, auditory, and visual capabilities are enlisted in a mutually explorative engagement, capitalizing upon all the user's abilities, and not at all limited to cerebral engagement. In these areas it is common to refer to affordances as a design concept: the idea that an environment or an interface affords opportunities for enaction, and that good design involves optimizing the role of such affordances.
Activity in the AI community has influenced enactivism as a whole. Referring extensively to modeling techniques for evolutionary robotics by Beer, the modeling of learning behavior by Kelso, and the modeling of sensorimotor activity by Saltzman, McGann, De Jaegher, and Di Paolo discuss how this work makes the dynamics of coupling between an agent and its environment, the foundation of enactivism, "an operational, empirically observable phenomenon." That is, AI provides concrete instances of enactivism that, although not as complex as living organisms, isolate and illuminate its basic principles.
Mathematical formalisms
Enactive cognition has been formalised in order to address subjectivity in artificial general intelligence.
A mathematical formalism of AGI specifies an agent proven to maximise a measure of intelligence. Prior to 2022, the only such formalism was AIXI, which maximised "the ability to satisfy goals in a wide range of environments". In 2015 Jan Leike and Marcus Hutter showed that "Legg-Hutter intelligence is measured with respect to a fixed UTM. AIXI is the most intelligent policy if it uses the same UTM", a result which "undermines all existing optimality properties for AIXI", rendering them subjective.
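For reference, the Legg-Hutter intelligence measure that AIXI maximises is usually written as below. This is the standard rendering from the literature rather than a formula quoted from the sources above:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
```

Here E is the class of computable environments with bounded total reward, K(μ) is the Kolmogorov complexity of environment μ relative to a fixed universal Turing machine (the choice of which is exactly the source of subjectivity noted above), and V^π_μ is the expected total reward of policy π in environment μ.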
Criticism
One of the essential theses of this approach is that biological systems generate meanings, engaging in transformational and not merely informational interactions. Since this thesis raises the problem of how cognition could begin in organisms at a developmental stage of only simple reflexes (the binding problem and the problem of primary data entry), enactivists proposed the concept of embodied information, which serves to start cognition. However, critics point out that this idea requires introducing the nature of intentionality before embodied information can be engaged. In a natural environment, the stimulus-reaction pair (causation) is unpredictable because many irrelevant stimuli may become randomly associated with the embodied information. While embodied information is only beneficial when intentionality is already in place, enactivists introduced the notion of the generation of meanings by biological systems (engaging in transformational interactions) without introducing a neurophysiological basis of intentionality.
See also
Action-specific perception
Autopoiesis
Biosemiotics
Cognitive science
Cognitive psychology
Computational theory of mind
Connectivism
Cultural psychology
Distributed cognition
Embodied cognition
Embodied embedded cognition
Enactive interfaces
Extended cognition
Extended mind
Externalism#Enactivism and embodied cognition
Mind–body problem
Phenomenology (philosophy)
Practopoiesis
Representationalism
Situated cognition
Social cognition
References
Further reading
Di Paolo, E. A., Rohde, M. and De Jaegher, H., (2010). Horizons for the Enactive Mind: Values, Social Interaction, and Play. In J. Stewart, O. Gapenne and E. A. Di Paolo (eds), Enaction: Towards a New Paradigm for Cognitive Science, Cambridge, MA: MIT Press, pp. 33 – 87.
Gallagher, Shaun (2017). Enactivist Interventions: Rethinking the Mind. Oxford University Press.
Hutto, D. D. (Ed.) (2006). Radical Enactivism: Intentionality, phenomenology, and narrative. In R. D. Ellis & N. Newton (Series Eds.), Consciousness & Emotion, vol. 2.
McGann, M. & Torrance, S. (2005). Doing it and meaning it (and the relationship between the two). In R. D. Ellis & N. Newton, Consciousness & Emotion, vol. 1: Agency, conscious choice, and selective perception. Amsterdam: John Benjamins.
Merleau-Ponty, Maurice (2005). Phenomenology of Perception. Routledge. (Originally published 1945)
Noë, Alva (2010). Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness. Hill and Wang.
Masciotra, Domenico (2023). Une approche énactive des formations, théorie et méthode: En devenir compétent et connaisseur. ASCAR Inc. (in French)
Notes
External links
Slides related to a chapter on haptic perception (recognition through touch).
An overview of the rationale and means and methods for the study of representations that the learner constructs in his/her attempt to understand knowledge in a given field. See in particular §1.2.1.4 Toward social representations (p. 24)
An extensive but uncritical introduction to the work of Francisco Varela and Humberto Maturana
Entire journal issue on enactivism's status and current debates.
Action (philosophy)
Behavioral neuroscience
Cognitive science
Consciousness
Educational psychology
Emergence
Epistemology of science
Knowledge representation
Metaphysics of mind
Motor cognition
Neuropsychology
Philosophy of perception
Philosophical theories
Philosophy of psychology
Psychological concepts
Psychological theories
Sociology of knowledge
Cooperative principle
In social science generally and linguistics specifically, the cooperative principle describes how people achieve effective conversational communication in common social situations—that is, how listeners and speakers act cooperatively and mutually accept one another to be understood in a particular way.
The philosopher of language Paul Grice introduced the concept in his pragmatic theory: "Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged."
In other words: say what you need to say, when you need to say it, and how it should be said. These are Grice's four maxims of conversation or Gricean maxims: quantity, quality, relation, and manner. They describe the rules followed by people in conversation. Applying the Gricean maxims is a way to explain the link between utterances and what is understood from them.
Though phrased as a prescriptive command, the principle is intended as a description of how people normally behave in conversation. Lesley Jeffries and Daniel McIntyre (2010) describe Grice's maxims as "encapsulating the assumptions that we prototypically hold when we engage in conversation." The assumption that the maxims will be followed helps to interpret utterances that seem to flout them on a surface level; such flouting often signals unspoken implicatures that add to the meaning of the utterance.
Grice's maxims
The concept of the cooperative principle was introduced by the philosopher Paul Grice in his pragmatic theory. Grice researched the ways in which people derive meaning from language. In his essay "Logic and Conversation" (1975) and book Studies in the Way of Words (1989), Grice outlined four key categories, or maxims, of conversation—quantity, quality, relation, and manner—under which there are more specific maxims and sub-maxims.
These describe specific rational principles observed by people who follow the cooperative principle in pursuit of effective communication. Applying the Gricean maxims is therefore a way to explain the link between utterances and what is understood from them.
According to Grice:

Our talk exchanges do not normally consist of a succession of disconnected remarks, and would not be rational if they did. They are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction.
This purpose or direction may be fixed from the start (e.g., by an initial proposal of a question for discussion), or it may evolve during the exchange; it may be fairly definite, or it may be so indefinite as to leave very considerable latitude to the participants (as in a casual conversation). But at each stage, some possible conversational moves would be excluded as conversationally unsuitable.
We might then formulate a rough general principle which participants will be expected (ceteris paribus) to observe, namely: Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged. One might label this the Cooperative Principle. [emphasis added]
On the assumption that some such general principle as this is acceptable, one may perhaps distinguish four categories under one or another of which will fall certain more specific maxims and submaxims, the following of which will, in general, yield results in accordance with the Cooperative Principle. Echoing Kant, I call these categories Quantity, Quality, Relation, and Manner.
Maxim of quantity (informativity)
The maxim of quantity is: be informative.
Submaxims:
Make your contribution as informative as is required (for the current purposes of the exchange).
Do not make your contribution more informative than is required.
In his book, Grice uses the following analogy for this maxim: "If you are assisting me to mend a car, I expect your contribution to be neither more nor less than is required. If, for example, at a particular stage I need four screws, I expect you to hand me four, rather than two or six."
Maxim of quality (truth)
The maxim of quality is: be truthful.
Supermaxim:
Try to make your contribution one that is true.
Submaxims:
Do not say what you believe is false.
Do not say that for which you lack adequate evidence.
In his book, Grice uses the following analogy for this maxim: "I expect your contributions to be genuine and not spurious. If I need sugar as an ingredient in the cake you are assisting me to make, I do not expect you to hand me salt; if I need a spoon, I do not expect a trick spoon made of rubber."
Maxim of relation (relevance)
The maxim of relation is: be relevant: the information provided should be relevant to the current exchange and omit any irrelevant information.
In his book, Grice uses the following analogy for this maxim: "I expect a partner's contribution to be appropriate to the immediate needs at each stage of the transaction. If I am mixing ingredients for a cake, I do not expect to be handed a good book, or even an oven cloth (though this might be an appropriate contribution at a later stage)."
With respect to this maxim, Grice writes:

Though the maxim itself is terse, its formulation conceals a number of problems that exercise me a good deal: questions about what different kinds and focuses of relevance there may be, how these shift in the course of a talk exchange, how to allow for the fact that subjects of conversations are legitimately changed, and so on. I find the treatment of such questions exceedingly difficult, and I hope to revert to them in later work.
Maxim of manner (clarity)
The maxim of manner is: be clear. Whereas the previous maxims are primarily concerned with what is said, the maxims of manner are concerned with how what is said is said.
Supermaxim:
Be perspicuous.
Submaxims:
Avoid obscurity of expression — i.e., avoid language that is difficult to understand.
Avoid ambiguity — i.e., avoid language that can be interpreted in multiple ways.
Be brief — i.e., avoid unnecessary verbosity.
Be orderly — i.e., provide information in an order that makes sense, and makes it easy for the recipient to process it.
Maxims in practice
Often the addressee of an utterance can add to the overt, surface meaning of a sentence by assuming the speaker has obeyed the maxims. Such additional meanings, if intended by the speaker, are called conversational implicatures. For example, in the exchange
A (to passer by): I am out of gas.
B: There is a gas station round the corner.
A will assume that B obeyed the maxim of relation. However, B's answer is only relevant to A if the gas station is open; so it has the implicature "The gas station is open."
Grice did not, however, assume that all people should constantly follow these maxims. Instead, he found it interesting when these were not respected, namely either flouted (with the listener being expected to be able to understand the message) or violated (with the listener being expected to not note this). Flouting means that the circumstances lead us to think that the speaker is nonetheless obeying the cooperative principle, and the maxims are followed on some deeper level, again yielding a conversational implicature. The importance is in what was not said. For example, answering "Are you interested in a game of tennis?" with "It's raining" only disrespects the maxim of relation on the surface; the reasoning behind this utterance is normally clear to the interlocutor.
Flouting the maxims
It is possible to flout a maxim and thereby convey a different meaning than what is literally said. Often in conversation, a speaker flouts a maxim to produce a negative pragmatic effect, as with sarcasm or irony. One can flout the maxim of quality to tell a clumsy friend who has just taken a bad fall that his gracefulness is impressive and obviously mean the complete opposite. Likewise, flouting the maxim of quantity may result in ironic understatement, the maxim of relevance in blame by irrelevant praise, and the maxim of manner in ironic ambiguity. The Gricean maxims are therefore often purposefully flouted by comedians and writers, who may hide the complete truth and choose their words for the effect of the story and the sake of the reader's experience.
Speakers who deliberately flout the maxims usually intend for their listener to understand their underlying implicature. In the case of the clumsy friend, he will most likely understand that the speaker is not truly offering a compliment. Therefore, cooperation is still taking place, but no longer on the literal level. When speakers flout a maxim, they still do so with the aim of expressing some thought. Thus, the Gricean maxims serve a purpose both when they are followed and when they are flouted.
Violating the maxims
Similar to paltering, violating a maxim means that the speaker is either outright lying by violating the maxim of quality, or being intentionally misleading by violating another maxim. For example, if there was not in fact a gas station around the corner in the example statement above and B was just playing a cruel prank, then B is violating the maxim of quality. A speaker violating the maxim of relevance might imply some fact is important when it is not; warning a cook that it takes a considerable length of time to heat the oven implies that preheating the oven is helpful and should be done, but perhaps the speaker knows the recipe does not actually involve baking anything. Violating the maxim of quantity can involve intentionally including useless details in an attempt to obscure or distract, or via telling half-truths that leave off important details such as the gas station being abandoned and not in operation anymore.
Criticism
Grice's theory is often disputed on the grounds that cooperative conversation, like most social behaviour, is culturally determined, and that the Gricean maxims and the cooperative principle therefore do not universally apply because of cultural differences. Keenan (1976) claims, for example, that the Malagasy people follow a completely opposite cooperative principle to achieve conversational cooperation. In their culture, speakers are reluctant to share information and flout the maxim of quantity by evading direct questions and replying with incomplete answers because of the risk of losing face by committing oneself to the truth of the information, as well as the fact that having information is a form of prestige. To push back on this point, Harnish (1976) points out that Grice only claims his maxims hold in conversations where the cooperative principle is in effect. The Malagasy speakers choose not to be cooperative, valuing the prestige of information ownership more highly. (It could also be said in this case that this is a less cooperative communication system, since less information is shared.)
Some argue that the maxims are vague. This may explain the criticism that the Gricean maxims can easily be misinterpreted to be a guideline for etiquette, instructing speakers on how to be moral, polite conversationalists. However, the Gricean maxims, despite their wording, are only meant to describe the commonly accepted traits of successful cooperative communication. Geoffrey Leech introduced the politeness maxims: tact, generosity, approbation, modesty, agreement, and sympathy.
It has also been noted by relevance theorists that conversational implicatures can arise in uncooperative situations, which cannot be accounted for in Grice's framework. For example, assume that A and B are planning a holiday in France and A suggests they visit their old acquaintance Gérard; and further, that B knows where Gérard lives, and A knows that B knows. The following dialogue ensues:
A: Where does Gérard live?
B: Somewhere in the South of France.
This is understood by A as B not wanting to say where exactly Gérard lives, precisely because B is not following the cooperative principle.
See also
Information manipulation theory
Lexical entrainment
Politeness theory
Question under discussion
Relevance theory
References
Bibliography
Grice, Paul (1975). "Logic and Conversation." Pp. 41–58 in Syntax and Semantics 3: Speech Acts, edited by P. Cole and J. J. Morgan. New York, NY: Academic Press.
Complex adaptive system
A complex adaptive system is a system that is complex in that it is a dynamic network of interactions, but the behavior of the ensemble may not be predictable according to the behavior of the components. It is adaptive in that the individual and collective behavior mutate and self-organize corresponding to the change-initiating micro-event or collection of events. It is a "complex macroscopic collection" of relatively "similar and partially connected micro-structures" formed in order to adapt to the changing environment and increase their survivability as a macro-structure. The Complex Adaptive Systems approach builds on replicator dynamics.
The study of complex adaptive systems, a subset of nonlinear dynamical systems, is an interdisciplinary matter that attempts to blend insights from the natural and social sciences to develop system-level models and insights that allow for heterogeneous agents, phase transition, and emergent behavior.
Overview
The term complex adaptive systems, or complexity science, is often used to describe the loosely organized academic field that has grown up around the study of such systems. Complexity science is not a single theory—it encompasses more than one theoretical framework and is interdisciplinary, seeking the answers to some fundamental questions about living, adaptable, changeable systems. The study of complex adaptive systems may adopt hard or softer approaches. Hard theories use formal language that is precise, tend to see agents as having tangible properties, and usually see objects in a behavioral system that can be manipulated in some way. Softer theories use natural language and narratives that may be imprecise, and agents are subjects having both tangible and intangible properties. Examples of hard complexity theories include Complex Adaptive Systems (CAS) and Viability Theory, and a class of softer theory is Viable System Theory. Many of the propositional considerations made in hard theory are also of relevance to softer theory. From here on, interest will center on CAS.
The study of CAS focuses on complex, emergent and macroscopic properties of the system. John H. Holland said that CAS "are systems that have a large number of components, often called agents, that interact and adapt or learn."
Typical examples of complex adaptive systems include: climate; cities; firms; markets; governments; industries; ecosystems; social networks; power grids; animal swarms; traffic flows; social insect (e.g. ant) colonies; the brain and the immune system; and the cell and the developing embryo. Human social group-based endeavors, such as political parties, communities, geopolitical organizations, war, and terrorist networks, are also considered CAS. The internet and cyberspace, composed, collaboratively built, and managed by a complex mix of human–computer interactions, are also regarded as a complex adaptive system. CAS can be hierarchical, but more often exhibit aspects of "self-organization".
The term complex adaptive system was coined in 1968 by sociologist Walter F. Buckley, who proposed a model of cultural evolution which regards psychological and socio-cultural systems as analogous with biological species. In the modern context, complex adaptive system is sometimes linked to memetics, or proposed as a reformulation of memetics. Michael D. Cohen and Robert Axelrod, however, argue that the approach is not social Darwinism or sociobiology because, even though the concepts of variation, interaction and selection can be applied to modelling 'populations of business strategies', for example, the detailed evolutionary mechanisms are often distinctly unbiological. As such, complex adaptive system is more similar to Richard Dawkins's idea of replicators.
General properties
What distinguishes a CAS from a pure multi-agent system (MAS) is the focus on top-level properties and features like self-similarity, complexity, emergence and self-organization. A MAS is defined as a system composed of multiple interacting agents; whereas in CAS, the agents as well as the system are adaptive and the system is self-similar. A CAS is a complex, self-similar collectivity of interacting, adaptive agents. Complex Adaptive Systems are characterized by a high degree of adaptive capacity, giving them resilience in the face of perturbation.
Other important properties are adaptation (or homeostasis), communication, cooperation, specialization, spatial and temporal organization, and reproduction. They can be found on all levels: cells specialize, adapt and reproduce themselves just like larger organisms do. Communication and cooperation take place on all levels, from the agent to the system level. The forces driving co-operation between agents in such a system, in some cases, can be analyzed with game theory.
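As a rough illustration of the kind of game-theoretic analysis this refers to, the following minimal sketch plays an iterated prisoner's dilemma between two agents. The payoff matrix, the strategy names, and the round count are illustrative assumptions, not drawn from the sources discussed in this article.

```python
# Minimal sketch: cooperation between two adaptive agents modeled as an
# iterated prisoner's dilemma. Payoff values and strategies are
# illustrative assumptions.

PAYOFFS = {  # (my move, their move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    seen_by_a, seen_by_b = [], []  # each agent's record of the other's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # sustained cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # cooperation collapses: (9, 14)
```

The point of the toy model is that a simple adaptive rule (tit for tat) sustains cooperation with a like-minded partner but withdraws it from a defector, which is the kind of force such game-theoretic analyses make precise.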
Characteristics
Some of the most important characteristics of complex adaptive systems are:
The number of elements is sufficiently large that conventional descriptions (e.g. a system of differential equations) are not only impractical, but cease to assist in understanding the system. Moreover, the elements interact dynamically, and the interactions can be physical or involve the exchange of information.
Such interactions are rich, i.e. any element or sub-system in the system is affected by and affects several other elements or sub-systems.
The interactions are non-linear: small changes in inputs, physical interactions or stimuli can cause large effects or very significant changes in outputs (a minimal numerical sketch of this sensitivity follows this list).
Interactions are primarily but not exclusively with immediate neighbours and the nature of the influence is modulated.
Any interaction can feed back onto itself directly or after a number of intervening stages. Such feedback can vary in quality. This is known as recurrency.
The overall behavior of the system of elements is not predicted by the behavior of the individual elements.
Such systems may be open, and it may be difficult or impossible to define system boundaries.
Complex systems operate under far-from-equilibrium conditions. There has to be a constant flow of energy to maintain the organization of the system.
Agents in the system are adaptive. They update their strategies in response to input from other agents, and the system itself.
Elements in the system may be ignorant of the behaviour of the system as a whole, responding only to the information or physical stimuli available to them locally.
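To make the non-linearity characteristic above concrete, here is a minimal numerical sketch. The logistic map stands in for a generic non-linear interaction; the map and the parameter value r = 4.0 are illustrative assumptions rather than a model of any particular system.

```python
# Minimal sketch of non-linear sensitivity: two trajectories of the
# logistic map x -> r*x*(1-x) that start 1e-9 apart diverge completely
# within a few dozen steps. r = 4.0 is an illustrative assumption.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.300000000, 0.300000001  # nearly identical initial conditions
for step in range(1, 41):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  |a-b|={abs(a - b):.2e}")
```

A difference in the ninth decimal place grows until the two trajectories are uncorrelated, which is the sense in which small changes in inputs can cause very significant changes in outputs.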
Robert Axelrod & Michael D. Cohen identify a series of key terms from a modeling perspective:
Strategy, a conditional action pattern that indicates what to do in which circumstances
Artifact, a material resource that has definite location and can respond to the action of agents
Agent, a collection of properties, strategies & capabilities for interacting with artifacts & other agents
Population, a collection of agents, or, in some situations, collections of strategies
System, a larger collection, including one or more populations of agents and possibly also artifacts
Type, all the agents (or strategies) in a population that have some characteristic in common
Variety, the diversity of types within a population or system
Interaction pattern, the recurring regularities of contact among types within a system
Space (physical), location in geographical space & time of agents and artifacts
Space (conceptual), "location" in a set of categories structured so that "nearby" agents will tend to interact
Selection, processes that lead to an increase or decrease in the frequency of various types of agent or strategies
Success criteria or performance measures, a "score" used by an agent or designer in attributing credit in the selection of relatively successful (or unsuccessful) strategies or agents
Turner and Baker synthesized the characteristics of complex adaptive systems from the literature and tested these characteristics in the context of creativity and innovation. Each of these eight characteristics was shown to be present in creativity and innovation processes:
Path dependent: Systems tend to be sensitive to their initial conditions. The same force might affect systems differently.
Systems have a history: The future behavior of a system depends on its initial starting point and subsequent history.
Non-linearity: Systems react disproportionately to environmental perturbations; outcomes differ from those of simple systems.
Emergence: Each system's internal dynamics affect its ability to change in a manner that might be quite different from other systems.
Irreducible: Irreversible process transformations cannot be reduced back to their original states.
Adaptive/Adaptability: Systems that are simultaneously ordered and disordered are more adaptable and resilient.
Operates between order and chaos: Adaptive tension emerges from the energy differential between the system and its environment.
Self-organizing: Systems are composed of interdependency, interactions of its parts, and diversity in the system.
Modeling and simulation
CAS are occasionally modeled by means of agent-based models and complex network-based models. Agent-based models are developed using various methods and tools, primarily by first identifying the different agents inside the model. Another method of developing models for CAS involves developing complex network models from interaction data among the various CAS components.
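A minimal sketch of the agent-based approach follows. All modelling choices here (binary agent states, a one-dimensional ring of neighbours, a local-majority update rule) are illustrative assumptions; the point is only that each agent responds to purely local information, yet ordered clusters emerge at the system level.

```python
# Minimal agent-based sketch: agents on a ring imitate the local majority.
# Global order (contiguous consensus clusters) emerges from local rules.

import random

N, STEPS = 40, 30
random.seed(0)
states = [random.choice([0, 1]) for _ in range(N)]  # initial disorder

def step(states):
    new = []
    for i, s in enumerate(states):
        # each agent sees only itself and its two immediate neighbours
        neighbourhood = [states[(i - 1) % N], s, states[(i + 1) % N]]
        new.append(1 if sum(neighbourhood) >= 2 else 0)  # adopt local majority
    return new

for _ in range(STEPS):
    states = step(states)

print("".join(map(str, states)))  # contiguous blocks of 0s and 1s emerge
```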
In 2013 SpringerOpen/BioMed Central launched an online open-access journal on the topic of complex adaptive systems modeling (CASM). Publication of the journal ceased in 2020.
Evolution of complexity
Living organisms are complex adaptive systems. Although complexity is hard to quantify in biology, evolution has produced some remarkably complex organisms. This observation has led to the common misconception of evolution being progressive and leading towards what are viewed as "higher organisms".
If this were generally true, evolution would possess an active trend towards complexity: in such a process, the most common level of complexity would increase over time. Indeed, some artificial life simulations have suggested that the generation of CAS is an inescapable feature of evolution.
However, the idea of a general trend towards complexity in evolution can also be explained through a passive process. This involves an increase in variance, but the most common value, the mode, does not change. Thus, the maximum level of complexity increases over time, but only as an indirect product of there being more organisms in total. This type of random process is also called a bounded random walk.
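The passive process can be sketched in a few lines of simulation. In the following illustrative model (all parameter values are assumptions), the complexity of each lineage takes an unbiased random walk with a reflecting lower bound: over time the maximum grows, while the mode stays at the floor.

```python
# Minimal sketch of a passive trend as a bounded random walk: each
# lineage's complexity steps up or down with equal probability but cannot
# fall below a minimum. Parameter values are illustrative assumptions.

import random
from collections import Counter

random.seed(1)
MIN_COMPLEXITY, LINEAGES, GENERATIONS = 1, 2000, 200

complexity = [MIN_COMPLEXITY] * LINEAGES
for _ in range(GENERATIONS):
    for i in range(LINEAGES):
        move = random.choice([-1, +1])  # unbiased step: no drive to complexity
        complexity[i] = max(MIN_COMPLEXITY, complexity[i] + move)

counts = Counter(complexity)
print("mode:", counts.most_common(1)[0][0])  # stays at (or near) the minimum
print("max: ", max(complexity))              # grows only because variance grows
```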
In this hypothesis, the apparent trend towards more complex organisms is an illusion resulting from concentrating on the small number of large, very complex organisms that inhabit the right-hand tail of the complexity distribution and ignoring simpler and much more common organisms. This passive model emphasizes that the overwhelming majority of species are microscopic prokaryotes, which comprise about half the world's biomass and constitute the vast majority of Earth's biodiversity. Therefore, simple life remains dominant on Earth, and complex life appears more diverse only because of sampling bias.
If there is a lack of an overall trend towards complexity in biology, this would not preclude the existence of forces driving systems towards complexity in a subset of cases. These minor trends would be balanced by other evolutionary pressures that drive systems towards less complex states.
See also
Artificial life
Chaos theory
Cognitive science
Command and Control Research Program
Complex system
Computational sociology
Dual-phase evolution
Econophysics
Enterprise systems engineering
Generative sciences
Mean-field game theory
Open system (systems theory)
Santa Fe Institute
Simulated reality
Sociology and complexity science
Super wicked problem
Swarm Development Group
Universal Darwinism
Multimodality
Multimodality is the application of multiple literacies within one medium. Multiple literacies or "modes" contribute to an audience's understanding of a composition. Everything from the placement of images to the organization of the content to the method of delivery creates meaning. This is the result of a shift from isolated text being relied on as the primary source of communication, to the image being utilized more frequently in the digital age. Multimodality describes communication practices in terms of the textual, aural, linguistic, spatial, and visual resources used to compose messages.
While all communication, literacy, and composing practices are and always have been multimodal, academic and scientific attention to the phenomenon only started gaining momentum in the 1960s. Work by Roland Barthes and others has led to a broad range of disciplinarily distinct approaches. More recently, rhetoric and composition instructors have included multimodality in their coursework. In their position statement on Understanding and Teaching Writing: Guiding Principles, the National Council of Teachers of English state that "'writing' ranges broadly from written language (such as that used in this statement), to graphics, to mathematical notation."
Definition
Although discussions of multimodality involve both medium and mode, these terms are not synonymous. However, their precise extents may overlap, depending on how precisely (or not) individual authors and traditions use the terms.
Gunther Kress's scholarship on multimodality is canonical within social semiotic approaches and has considerable influence in many other approaches, such as in writing studies. Kress defines 'mode' in two ways. One: a mode is something that can be socially or culturally shaped to give something meaning. Images, pieces of writing, and speech patterns are all examples of modes. Two: modes are semiotic, shaped by intrinsic characteristics and their potential within their medium, as well as what is required of them by their culture or society.
Thus, every mode has a distinct historical and cultural potential and/or limitation for its meaning. For example, if we broke down writing into its modal resources, we would have grammar, vocabulary, and graphic "resources" as the acting modes. Graphic resources can be further broken down into font size, type, color, size, spacing within paragraphs, etc. However, these resources are not deterministic. Instead, modes shape and are shaped by the systems in which they participate. Modes may aggregate into multimodal ensembles and be shaped over time into familiar cultural forms. A good example of this is films, which combine visual modes (in setting and in attire), modes of dramatic action and speech, and modes of music or other sounds. Studies of multimodal work in this field include van Leeuwen; Bateman and Schmidt; and Burn and Parker's theory of the Kineikonic Mode.
In social semiotic accounts, a medium is the substance in which meaning is realized and through which it becomes available to others. Mediums include video, image, text, audio, etc. Socially, a medium includes semiotic, sociocultural, and technological practices. Examples include film, newspapers, billboards, radio, television, a classroom, etc. Multimodality also makes use of the electronic medium by creating digital modes through the interlacing of image, writing, layout, speech, and video. Mediums have become modes of delivery that take current and future contexts into account.
History
Multimodality (as a phenomenon) has received increasingly theoretical characterizations throughout the history of communication. Indeed, the phenomenon has been studied at least since the 4th century BC, when classical rhetoricians alluded to it with their emphasis on voice, gesture, and expressions in public speaking. However, the term was not defined with significance until the 20th century. During this time, an exponential rise in technology created many new modes of presentation. Since then, multimodality has become standard in the 21st century, applying to various network-based forms such as art, literature, social media and advertising. The monomodality, or singular mode, which used to define the presentation of text on a page has been replaced with more complex and integrated layouts. John A. Bateman says in his book Multimodality and Genre, "Nowadays… text is just one strand in a complex presentational form that seamlessly incorporates visual aspect 'around,' and sometimes even instead of, the text itself." Multimodality has quickly become "the normal state of human communication."
Expressionism
During the 1960s and 1970s, many writers looked to photography, film, and audiotape recordings in order to discover new ideas about composing. This led to a resurgence of focus on sensory self-illustration known as expressionism. Expressionist ways of thinking encouraged writers to find their voice outside of language by placing it in a visual, oral, spatial, or temporal medium. Donald Murray, who is often linked to expressionist methods of teaching writing, once said, "As writers it is important that we move out from that which is within us to what we see, feel, hear, smell, and taste of the world around us. A writer is always making use of experience." Murray instructed his writing students to "see themselves as cameras" by writing down every single visual observation they made for one hour. Expressionist thought emphasized personal growth, and linked the art of writing with all visual art by calling both a type of composition. Also, by making writing the result of a sensory experience, expressionists defined writing as a multisensory experience, and asked for it to have the freedom to be composed across all modes, tailored for all five senses.
Cognitive developments
During the 1970s and 1980s, multimodality was further developed through cognitive research about learning. Jason Palmeri cites researchers such as James Berlin and Joseph Harris as being important to this development; Berlin and Harris studied alphabetic writing and how its composition compared to art, music, and other forms of creativity. Their research had a cognitive approach which studied how writers thought about and planned their writing process. James Berlin declared that the process of composing writing could be directly compared to that of designing images and sound. Furthermore, Joseph Harris pointed out that alphabetic writing is the result of multimodal cognition. Writers often conceptualize their work by non-alphabetic means, through visual imagery, music, and kinesthetic feelings. This idea was reflected in the popular research of Neil D. Fleming, more commonly known as the neuro-linguistic learning styles. Fleming's three styles of auditory, kinesthetic, and visual learning helped to explain the modes in which people were best able to learn, create, and interpret meaning. Other researchers such as Linda Flower and John R. Hayes theorized that alphabetic writing, though it is a principal modality, sometimes could not convey the non-alphabetic ideas a writer wished to express.
Audience
Every text has its own defined audience, and makes rhetorical decisions to improve the audience's reception of that same text. In this same manner, multimodality has evolved to become a sophisticated way to appeal to a text's audience. Relying upon the canons of rhetoric in a different way than before, multimodal texts have the ability to address a larger, yet more focused, intended audience. Multimodality does more than solicit an audience; the effects of multimodality are embedded in an audience's semiotic, generic and technological understanding.
Psychological effects
The appearance of multimodality, at its most basic level, can change the way an audience perceives information. The most basic understanding of language comes via semiotics – the association between words and symbols. A multimodal text changes its semiotic effect by placing words with preconceived meanings in a new context, whether that context is audio, visual, or digital. This in turn creates a new, foundationally different meaning for an audience. Bezemer and Kress, two scholars on multimodality and semiotics, argue that students understand information differently when text is delivered in conjunction with a secondary medium, such as image or sound, than when it is presented in alphanumeric format only. This is because such delivery draws the viewer's attention to "both the originating site and the site of recontextualization". Meaning is moved from one medium to the next, which requires the audience to redefine their semiotic connections. Recontextualizing an original text within other mediums creates a different sense of understanding for the audience, and this new type of learning can be controlled by the types of media used.
Multimodality also can be used to associate a text with a specific argumentative purpose, e.g., to state facts, make a definition, cause a value judgment, or make a policy decision. Jeanne Fahnestock and Marie Secor, professors at the University of Maryland and the Pennsylvania State University, labeled the fulfillment of these purposes stases. A text's stasis can be altered by multimodality, especially when several mediums are juxtaposed to create an individualized experience or meaning. For example, an argument that mainly defines a concept is understood as arguing in the stasis of definition; however, it can also be assigned a stasis of value if the way the definition is delivered equips writers to evaluate a concept, or judge whether something is good or bad. If the text is interactive, the audience is facilitated to create their own meaning from the perspective the multimodal text provides. By emphasizing different stases through the use of different modes, writers are able to further engage their audience in creating comprehension.
Genre effects
Multimodality also obscures an audience's concept of genre by creating gray areas out of what was once black and white. Carolyn R. Miller, a distinguished professor of rhetoric and technical communication at North Carolina State University, observed in her genre analysis of the weblog how genre shifted with the invention of blogs, stating that "there is strong agreement on the central features that make a blog a blog." Miller defines blogs on the basis of their reverse chronology, frequent updating, and combination of links with personal commentary. However, the central features of blogs are obscured when considering multimodal texts. Some features are absent, such as the ability for posts to be independent of each other, while others are present. This creates a situation where the genre of multimodal texts is impossible to define; rather, the genre is dynamic, evolutionary and ever-changing.
The delivery of new texts has radically changed along with technological influence. Composition now consists of the anticipation of future remediation. Writers think about the type of audience a text will be written for, and anticipate how that text might be reformed in the future. Jim Ridolfo coined the term rhetorical velocity to explain a conscious concern for the distance, speed, time, and travel it will take for a third party to rewrite an original composition. The use of recomposition allows for an audience to be involved in a public conversation, adding their own intentionality to the original product. This new method of editing and remediation is attributed to the evolution of digital text and publication, giving technology an important role in writing and composition.
Technological effects
Multimodality has evolved along with technology. This evolution has created a new concept of writing, a collaborative context keeping the reader and writer in relationship. The concept of reading is different with the influence of technology due to the desire for a quick transmission of information. In reference to the influence of multimodality on genre and technology, Professor Anne Frances Wysocki expands on how reading as an action has changed in part because of technology reform: "These various technologies offer perspectives for considering and changing approaches we have inherited to composing and interpreting pages....". Along with the interconnectedness of media, computer-based technologies are designed to make new texts possible, influencing rhetorical delivery and audience.
Education
Multimodality in the 21st century has caused educational institutions to consider changing the forms of their traditional aspects of classroom education. With a rise in digital and Internet literacy, new modes of communication are needed in the classroom in addition to print, from visual texts to digital e-books. Rather than replacing traditional literacy values, multimodality augments and increases literacy for educational communities by introducing new forms. According to Miller and McVee, authors of Multimodal Composing in Classrooms, "These new literacies do not set aside traditional literacies. Students still need to know how to read and write, but new literacies are integrated." The learning outcomes of the classroom stay the same, including – but not limited to – reading, writing, and language skills. However, these learning outcomes are now being presented in new forms, as multimodality in the classroom suggests a shift from traditional media such as paper-based text to more modern media such as screen-based texts. The choice to integrate multimodal forms in the classroom is still controversial within educational communities. The idea of learning has changed over the years and now, some argue, must adapt to the personal and affective needs of new students. In order for classroom communities to be legitimately multimodal, all members of the community must share expectations about what can be done through integration, requiring a "shift in many educators' thinking about what constitutes literacy teaching and learning in a world no longer bound by print text."
Multiliteracy
Multiliteracy is the concept of understanding information through various methods of communication and being proficient in those methods. With the growth of technology, there are more ways to communicate than ever before, making it necessary for our definition of literacy to change in order to better accommodate these new technologies. These new technologies consist of tools such as text messaging, social media, and blogs. However, these modes of communication often employ multiple mediums simultaneously, such as audio, video, pictures, and animation, thus making content multimodal.
The combination of these different mediums is what is called content convergence, which has become a cornerstone of multimodal theory. Within our modern digital discourse, content has become accessible to many, remixable, and easily spreadable, allowing ideas and information to be consumed, edited, and improved by the general public. An example is Wikipedia: the platform allows free consumption and authorship of its work, which in turn facilitates the spread of knowledge through the efforts of a large community. It creates a space in which authorship has become collaborative and the product of said authorship is improved by that collaboration. As distribution of information has grown through this process of content convergence, it has become necessary for our understanding of literacy to evolve with it.
The shift away from written text as the sole mode of nonverbal communication has caused the traditional definition of literacy to evolve. While text and image may exist separately, digitally, or in print, their combination gives birth to new forms of literacy and thus a new idea of what it means to be literate. Text, whether academic, social, or for entertainment purposes, can now be accessed in a variety of different ways and edited by several individuals on the Internet. In this way, texts that would typically be concrete become amorphous through the process of collaboration. The spoken and written word are not obsolete, but they are no longer the only ways to communicate and interpret messages. Many mediums can still be used individually; combining and repurposing one mode of communication for another has contributed to the evolution of different literacies.
Communication is spread across a medium through content convergence, such as a blog post accompanied by images and an embedded video. This idea of combining mediums gives new meaning to the concept of translating a message. The combination of varying forms of media allows for content to be either reiterated or supplemented by its parts. This reshaping of information from one mode to another is known as transduction. As information changes from one mode to the next, our comprehension of its message is attributed to multiliteracy. Xiaolo Bao defines three successive learning stages that make up multiliteracy: the Grammar-Translation Method, the Communicative Method, and the Task-Based Method. Simply put, they can be described as the fundamental understanding of syntax and its function, the practice of applying that understanding to verbal communication, and lastly, the application of said textual and verbal understandings to hands-on activities. In an experiment conducted by the Canadian Center of Science and Education, students were either placed in a classroom with a multimodal course structure, or a classroom with a standard learning course structure as a control group. Tests were administered throughout the length of the two courses, with the multimodal course concluding in a higher learning success rate and a reportedly higher rate of satisfaction among students. This indicates that applying multimodality to instruction yields overall better results in developing multiliteracy than conventional forms of learning when tested in real-life scenarios.
Classroom literacy
Multimodality in classrooms has brought about the need for an evolving definition of literacy. According to Gunther Kress, a popular theorist of multimodality, literacy usually refers to the combination of letters and words to make messages and meaning and can often be attached to other words in order to express knowledge of the separate fields, such as visual- or computer-literacy. However, as multimodality becomes more common, not only in classrooms, but in work and social environments, the definition of literacy extends beyond the classroom and beyond traditional texts. Instead of referring only to reading and alphabetic writing, or being extended to other fields, literacy and its definition now encompass multiple modes. It has become more than just reading and writing, and now includes visual, technological, and social uses among others.
Georgia Tech's writing and communication program created a definition of multimodality based on the acronym, WOVEN. The acronym explains how communication can be written, oral, visual, electronic, and nonverbal. Communication has multiple modes that can work together to create meaning and understanding. The goal of the program is to ensure students are able to communicate effectively in their everyday lives using various modes and media.
As classroom technologies become more prolific, so do multimodal assignments. Students in the 21st century have more options for communicating digitally, be it texting, blogging, or posting on social media. This rise in computer-controlled communication has required classes to become multimodal in order to teach students the skills required in the 21st-century work environment. However, in the classroom setting, multimodality is more than just combining multiple technologies; rather, it is about creating meaning through the integration of multiple modes. Students learn through a combination of these modes, including sound, gestures, speech, images and text. For example, in digital components of lessons, there are often pictures, videos, and sound bites as well as text to help students grasp a better understanding of the subject. Multimodality also requires that teachers move beyond teaching with just text, as the printed word is only one of many modes students must learn and use.
The application of visual literacy in English classroom can be traced back to 1946 when the instructor's edition of the popular Dick and Jane elementary reader series suggested teaching students to "read pictures as well as words" (p. 15). During the 1960s, a couple of reports issued by the National Council of Teachers of English suggested using television and other mass media such as newspapers, magazines, radio, motion pictures, and comic books in English classroom. The situation is similar in postsecondary writing instruction. Since 1972, visual elements have been incorporated into some popular twentieth-century college writing textbooks like James McCrimmon's Writing with a Purpose.
Higher education
Colleges and universities around the world are beginning to use multimodal assignments to adapt to the technology currently available. Assigning multimodal work also requires professors to learn how to teach multimodal literacy. Implementing multimodality in higher education is being researched to find out the best way to teach and assign multimodal tasks.
Multimodality in the college setting can be seen in an article by Teresa Morell, where she discusses how teaching and learning elicit meaning through modes such as language, speaking, writing, gesturing, and space. The study observes an instructor who conducts a multimodal group activity with students. Previous studies observed different classes using modes such as gestures, classroom space, and PowerPoints. The current study observes an instructor's combined use of multiple modes in teaching to see its effect on student participation and conceptual understanding. She explains the different spaces of the classroom, including the authoritative space, interactional space, and personal space. The analysis displays how an instructor's multimodal choices affect student participation and understanding. On average the instructor used three to four modes, most often some kind of gaze, gesture, and speech. He got students to participate by formulating a group definition of cultural stereotypes. It was found that those who are learning a second language depend on more than just the spoken and written word for conceptual learning, meaning multimodal education has benefits.
Multimodal assignments involve many aspects other than written words, which may be beyond an instructor's education. Educators have been taught how to grade traditional assignments, but not those that utilize links, photos, videos or other modes. Dawn Lombardi is a college professor who admitted to her students that she was a bit "technologically challenged" when assigning a multimodal essay using graphics. The most difficult part regarding these assignments is the assessment. Educators struggle to grade these assignments because the meaning conveyed may not be what the student intended. They must return to the basics of teaching to configure what they want their students to learn, achieve, and demonstrate in order to create criteria for multimodal tasks. Lombardi made grading criteria based on creativity, context, substance, process, and collaboration, which was presented to the students prior to beginning the essay.
Another type of visuals-related writing task is visual analysis, especially advertising analysis, which began in the 1940s and has been prevalent in postsecondary writing instruction for at least 50 years. This pedagogical practice of visual analysis did not focus on how visuals, including images, layout, or graphics, are combined or organized to make meanings.
Then, through the following years, the application of visuals in the composition classroom has been continually explored, and the emphasis has shifted to the visual features—margins, page layout, font, and size—of composition and its relationship to graphic design, web pages, and digital texts which involve images, layout, color, font, and arrangements of hyperlinks. In line with the New London Group, George (2002) argues that both visual and verbal elements are crucial in multimodal designs.
Acknowledging the importance of both language and visuals in communication and meaning making, Shipka (2005) further advocates for a multimodal, task-based framework in which students are encouraged to use diverse modes and materials—print texts, digital media, videotaped performances, old photographs—and any combinations of them in composing their digital/multimodal texts. Meanwhile, students are provided with opportunities to deliver, receive, and circulate their digital products. In so doing, students can understand how systems of delivery, reception, and circulation interrelate with the production of their work.
Multimodal communities
Multimodality has significance within varying communities, such as the private, public, educational, and social communities. Because of multimodality, the private domain is evolving into a public domain in which certain communities function. Because social environments and multimodality mutually influence each other, each community is evolving in its own way. This evolution is evident in the language, as discussed by Grifoni, D'Ulizia, and Ferri in their work.
Cultural multimodality
Based on these representations, communities decide through social interaction how modes are commonly understood. In the same way, these assumptions and determinations of the way multimodality functions can actually create new cultural and social identities. For example, Bezemer and Kress define modes as "socially and culturally shaped resource[s] for making meaning." According to Bezemer, "In order for something to 'be a mode,' there needs to be a shared cultural sense within a community of a set of resources and how these can be organized to realize meaning." Cultures that pull from different or similar resources of knowledge, understanding, and representations will communicate through different or similar modes. Signs, for instance, are visual modes of communication determined by our daily necessities.
In her dissertation, Elizabeth J. Fleitz, a PhD in English with a concentration in Rhetoric and Writing from Bowling Green State University, argues that the cookbook, which she describes as inherently multimodal, is an important feminist rhetorical text. According to Fleitz, women were able to form relationships with other women through communicating in socially acceptable literature like cookbooks; "As long as the woman fulfills her gender role, little attention is paid to the increasing amount of power she gains in both the private and public spheres." Women who would have been committed to staying at home could become published authors, gaining a voice in a phallogocentric society without being viewed as threats. Women revised and adapted different modes of writing to fit their own needs. According to Cinthia Gannett, author of "Gender and the Journal," diary writing, which evolved from men's journal writing, has "integrate[d] and confirm[ed] women's perceptions of domestic, social, and spiritual life, and invoke[d] a sense of self." It is these methods of remediation that characterize women's literature as multimodal. The recipes inside of the cookbooks also qualify as multimodal. Recipes delivered through any medium, whether that be a cookbook or a blog, can be considered multimodal because of the "interaction between body, experience, knowledge, and memory, multimodal literacies" that all relate to one another to create our understanding of the recipe. Recipe exchanging is an opportunity for networking and social interaction. According to Fleitz, "This interaction is undeniably multimodal, as this network "makes do" with alternative forms of communication outside dominant discursive methods, in order to further and promote women's social and political goals." Cookbooks are only a singular example of the capacity of multimodality to build community identities, but they aptly demonstrate the nuanced aspects of multimodality. Multimodality does not just encompass tangible components, such as text, images, sound, etc.; it also draws from experiences, prior knowledge, and cultural understanding.
Another change that has occurred due to the shift from the private environment to the public is audience construction. In the privacy of the home, the family generally targets a specific audience: family members or friends. Once the photographs become public, an entirely new audience is addressed. As Pauwels notes, "the audience may be ignored, warned and offered apologies for the trivial content, directly addressed relating to personal stories, or greeted as highly appreciated publics that need to be entertained and invited to provide feedback."
Multimodal academic writing practices
In everyday life, multimodal construction and communication of meaning is ubiquitous. However, academic writing has maintained an overwhelming dominance of the linguistic resource up to the present (Blanca, 2015). The need to open the game to other possible forms of writing in the academy lies in the conviction that the semiotic resources used in the processes of academic inquiry and communication have an impact on the findings (Sousanis, 2015), since both processes are linked in the epistemic potential of writing, understood here in multimodal terms. Therefore, the idea is not about "embellishing" academic discourse with illustrative visual resources, but rather about enabling other ways of thinking, new associations; ultimately, new knowledge, arising from the interweaving of various verbal and nonverbal modes. The strategic use of page design, the juxtaposition of text in columns or of text and image, and the use of typography (in type, size, color, etc.) are just a few examples of how the semiotic potential of the genres of academic circulation can be exploited. This is linked to the possibilities of enriching the forms of academic writing by appealing to non-linear textual development in addition to linear, and by tensioning image and text in their infinite possibilities of creating meaning (Mussetta, Siragusa & Vottero, 2020; Lamela Adó & Mussetta, 2020; Mussetta, Lamela Adó & Peixoto, 2021).
Multimodal fiction
There is now an increasing number of fictional narratives that explore and graphically exploit the text and the materiality of the book in its traditional format for the construction of meaning: these are what some critics call multimodal novels (Hallet 2009, p. 129; Gibbons 2012b, p. 421, among others), though they also receive the name of visual or hybrid novels (Luke 2013, p. 21; Reynolds 1998, p. 169; Sadokierski 2010, p. 7). These narratives include a variety of semiotic resources and modes, ranging from the strategic use of different typographies and blank spaces to the inclusion of drawings, photos, maps and diagrams that do not correspond to the usual notion of illustration but are an indissoluble part of the plot, with specific functions in their contribution of meaning to the work in its multiple combinations (Mussetta 2014; Mussetta, 2017a; Mussetta, 2017b; Mussetta 2017c; Mussetta, 2020).
Communication in business
In the business sector, multimodality creates opportunities for both internal and external improvements in efficiency. Similar to shifts in education to utilize both textual and visual learning elements, multimodality allows businesses to communicate better. According to Vala Afshar, this transition first started to occur in the 1980s as "technology had become an essential part of business." This level of communication has been amplified by the integration of digital media and tools during the 21st century.
Internally, businesses use multimodal platforms for analytical and systemic purposes, among others. Through multimodality, a company enhances its productivity and creates transparency for management. Improved employee performance from these practices can correlate with ongoing interactive training and intuitive digital tools.
Multimodality is used externally to increase customer satisfaction by providing multiple platforms during one interaction. With the popularity of text, chat, and social media during the 21st century, most businesses attempt to promote cross-channel engagement. Businesses aim to increase customer experience and solve any potential issue or inquiry quickly. A company's goal with external multimodality centers around better communication in real-time to make customer service more efficient.
Social multimodality
One shift caused by multi-literate environments is that private-sphere texts are being made more public. The private sphere is described as an environment in which people have a sense of personal authority and are distanced from institutions, such as the government. The family and home are considered to be a part of the private sphere. Family photographs are an example of multimodality in this sphere. Families take pictures (sometimes captioning them) and compile them in albums that are generally meant to be displayed to other family members or audiences that the family allows. These once private albums are entering the public environment of the Internet more often due to the rapid development and adoption of technology.
According to Luc Pauwels, a professor of communication studies at the University of Antwerp, Belgium, "the multimedia context of the Web provides private image makers and storytellers with an increasingly flexible medium for the construction and dissemination of fact and fiction about their lives." These relatively new website platforms allow families to manipulate photographs and add text, sound, and other design elements. By using these various modes, families can construct a story of their lives that is presented to a potentially universal audience. Pauwels states that "digitized (and possibly digitally 'adjusted') family snapshots...may reveal more about the immaterial side of family culture: the values, beliefs, and aspirations of a group of people." This immaterial side of the family is better demonstrated through the use of multimodality on the Web because certain events and photographs can take precedence over others based on how they are organized on the site, and other visual or audio components can aid in evoking a message.
Similar to the evolution of family photography into the digital family album is the evolution of the diary into the personal weblog. As North Carolina State University professors Carolyn Miller and Dawn Shepherd state, "the weblog phenomenon raises a number of rhetorical issues,… [such as] the peculiar intersection of the public and private that weblogs seem to invite." Bloggers have the opportunity to communicate personal material in a public space, using words, images, sounds, etc. As described in the example above, people can create narratives of their lives in this expanding public community. Miller and Shepherd say that "validation increasingly comes through mediation, that is, from the access and attention and intensification that media provide." Bloggers can create a "real" experience for their audience(s) because of the immediacy of the Internet. A "real" experience refers to "perspectival reality, anchored in the personality of the blogger."
Digital applications
Information is presented through the design of digital media, engaging with multimedia to offer a multimodal principle of composition. Standard words and pictures can be presented as moving images and speech in order to enhance the meaning of words. Joddy Murray wrote in "Composing Multimodality" that both discursive rhetoric and non-discursive rhetoric should be examined in order to see the modes and media used to create such composition. Murray also includes the benefits of multimodality, which lends itself to "acknowledge and build into our writing processes the importance of emotions in textual production, consumption, and distribution; encourage digital literacy as well as nondigital literacy in textual practice." Murray shows a new way of thinking about composition, allowing images to be "sensuous and emotional" symbols of what they represent, not focusing so much on the "conceptual and abstract."
Murray, drawing in his article on Richard Lanham's The Electronic Word: Democracy, Technology, and the Arts, writes that "discursive text is in the center of everything we do," going on to say that students coexist in a world that "includes blogs, podcasts, modular community web spaces, cell phone messaging…", and urging that students be taught how to compose with rhetorical minds in these new, and not-so-new, texts. Cultural change, Lanham suggests, refocuses writing theory towards the image, demonstrating a shift in alphabet-to-icon ratios in electronic writing. A prime example can be seen in Apple's iPhone, where "emojis" function as icons on a separate keyboard to convey what words would once have delivered. Another example is Prezi. Often likened to Microsoft PowerPoint, Prezi is a cloud-based presentation application that allows users to create text, embed video, and make visually aesthetic projects. Prezi's presentations zoom the eye in, out, up and down to create a multi-dimensional appeal. Users also utilize different media within this medium that is itself unique.
Introduction of the Internet
In the 1990s, multimodality grew in scope with the release of the Internet, personal computers, and other digital technologies. The literacy of the emerging generation changed, becoming accustomed to text circulated in pieces, informally, and across multiple mediums of image, color, and sound. The change represented a fundamental shift in how writing was presented: from print-based to screen-based. Literacy evolved so that students arrived in classrooms being knowledgeable in video, graphics, and computer skills, but not alphabetic writing. Educators had to change their teaching practices to include multimodal lessons in order to help students achieve success in writing for the new millennium.
Accessing the audience
In the public sphere, multimedia popularly refers to implementations of graphics in ads, animations and sounds in commercials, and areas of overlap between these. One thought behind this use of multimedia is that, through technology, a larger audience can be reached via the consumption of different technological mediums, and in some cases, as the Kaiser Family Foundation reported in 2010, multiple platforms can "help drive increased consumption". This is a drastic change from five years earlier: "8–18 year olds devote an average of 7 hours and 38 minutes to using media across a typical day (more than 53 hours a week)." With the possibility of attaining multi-platform social media and digital advertising campaigns also come new regulations from the Federal Trade Commission (FTC) on how advertisers can communicate with their consumers via social networks. Because multimodal tools are often tied to social networks, it is important to gauge the consumer within these fair practices. Companies like Burberry Group PLC and Lacoste S.A. (fashion houses for Burberry and Lacoste respectively) engage their consumers via the popular blogging site Tumblr; Publix Supermarkets, Inc. and Jeep engage their consumers via Twitter; celebrities and athletic teams/athletes such as Selena Gomez and the Miami Heat engage their audience via Facebook through fan pages. These examples do not limit the presence of these specific entities to a single medium, but offer a wide variety of what is found for each respective source.
Advertising
Multimedia advertising is the result of animation and graphic designs used to sell products or services. There are various forms of multimedia advertising, delivered through videos, online advertising, DVDs, CDs, and so on. These outlets afford companies the ability to increase their customer base through multimedia advertising, a necessary contribution to the marketing of products and services. For instance, online advertising is a recent example of the use of multimedia in advertising that provides many benefits to online companies and traditional corporations. New technologies today have brought on an evolution of multimedia in advertising and a shift from traditional techniques. The importance of multimedia advertising has significantly increased for companies and for their effectiveness in marketing or selling products and services. Corporate advertising concerns itself with the idea that "Companies are likely to appeal to a broader audience and increase sales through search engine optimization, extensive keyword research, and strategic linking." The concept behind the advertising platform can span across multiple mediums, yet, at its core, be centered around the same scheme.
Coca-Cola ran an overarching "Open Happiness" campaign across multiple media platforms including print ads, web ads, and television commercials. The purpose of this central function was to communicate a common message over multiple platforms to further encourage an audience to buy into a reiterated message. The strength of such multimedia campaigns is that they implement all available mediums, any of which could prove successful with a different audience member.
Social media
Social media and digital platforms are ubiquitous in today's everyday life. These platforms do not operate solely based on their original makeup; they utilize media from other technologies and tools to add multidimensionality to what will be created on their own platform. These added modal features create a more interactive experience for the user.
Prior to Web 2.0's emergence, most websites listed information with little to no communication with the reader. With Web 2.0, social media and digital platforms have become part of everyday living for businesses, law offices, advertisers, and others. These platforms combine mediums with other technologies and tools to further enhance and improve what can be created on a single platform.
Hashtags (#topic) and user tags (@username) make use of metadata in order to track "trending" topics and to alert users when their name is used within a post on a social media site. Used by various social media websites (most notably Twitter and Facebook), these features add internal linkage between users and themes. Characteristics of a multimodal feature can be seen through the status update option on Facebook. Status updates combine the affordances of personal blogs, Twitter, instant messaging, and texting in a single feature. As of 2013, the status update button prompts a user, "What's on your mind?", a change from the 2007 prompt, "What are you doing right now?" This change was added by Facebook to promote greater flexibility for the user. This multimodal feature allows a user to add text, video, image, links, and to tag other users. Twitter's microblogging platform, with 140 characters in a single message, allows users to link to other users and websites and to attach pictures. This new media platform affects the literacy practice of the current generation by condensing the conversational context of the internet into fewer characters while encapsulating several media.
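As a simple illustration of how such tags are read as machine-usable metadata, the following sketch extracts hashtags and user tags from a post with regular expressions. The patterns are simplified assumptions; real platforms apply more elaborate tokenization rules.

```python
# Minimal sketch: extracting hashtag and mention metadata from a post.
# The regular expressions are simplified assumptions.

import re

HASHTAG = re.compile(r"#(\w+)")
MENTION = re.compile(r"@(\w+)")

post = "Watching the game with @alice and @bob tonight #basketball #finals"

print("hashtags:", HASHTAG.findall(post))  # ['basketball', 'finals']
print("mentions:", MENTION.findall(post))  # ['alice', 'bob']
```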
Other examples include the blog, a term coined in 1999 as a contraction of "web log"; the foundation of blogging is often attributed to various people in the mid-to-late 1990s. Within the realm of blogging, videos, images, and other media are often added to otherwise text-only entries in order to create a more multifaceted read.
Gaming
One of the current digital applications of multimodality in the field of education has been developed by James Gee through his approach to effective learning through video games. Gee contends that there is a great deal of knowledge about learning that schools, workplaces, families, and academic researchers should take from good computer and video games, such as a "whole set of fundamentally sound learning principles" that can be used in many other domains, for instance when it comes to teaching science in schools.
Storytelling
Another application of multimodality is digital film-making, sometimes referred to as "digital storytelling". A digital story is defined as a short film that incorporates digital images, video, and audio in order to create a personally meaningful narrative. Through this practice, people act as film-makers, using multimodal forms of representation to design, create, and share their life stories or learning stories with a specific audience, commonly through online platforms. Digital storytelling, as a digital literacy practice, is commonly used in educational settings. It is also used in mainstream media, as seen in the increasing number of projects that motivate members of the online community to create and share their digital stories.
Multimodal methods in social science research
Multimodality is also a growing methodology in the social sciences. Beyond the area of multimodal anthropology, there is growing interest in it as a methodology in sociology and management.
For example, management researchers have highlighted the "material and visual turn" in organization research. Going beyond the multimodal character of ethnographic research, this growing area of research treats text as only one mode among many, examining, for instance, visual communication modes and issues such as the legitimacy of new ventures. Multimodality might involve spatial, aural, visual, sensual, and other data, perhaps with multiple modes embedded in a single material object.
Multimodality can be used particularly for meaning construction: in institutional theory, for example, multimodal compositions can enhance the perceived validity of particular narratives. Multimodal methods may also be used to deinstitutionalize unsustainable parts of an institution in order to sustain the institution itself. Beyond institutional theory, "multimodal historical cues" may be found embedded in particular historical practices, highlighting the way organizations use particular relationships to the past, and multimodal discourses allow organizations to claim legitimate yet distinctive identities, at least with visual and verbal discourses. Work done under the banner of multimodality sometimes spans into experimental research, such as findings that investors' judgments can be strongly influenced by visual information even though those individuals are relatively unaware of how much visual factors influence their decisions; this suggests that more research is needed on the power of memes and disinformation in visual modes in driving social movements on social media.
One interesting point in this growing research area is that some researchers hold that multimodal research is not just a matter of going beyond a focus on text as data; they argue that to be truly multimodal, research requires more than one modality, that is, engaging "with several modes of communication (e.g. visual and verbal, or visual and material)". This marks a further development from researchers who align themselves with the multimodal label but then focus on a single modality such as images, which shows the interest in modalities beyond textual data alone. Another point for future research can be seen in contrasts, for example between multimodal and specifically "cross-modal" patterns.
See also
References
Mass media
Communication theory
Semiotics
Composition (language)
Writing
Holism in science
Holism in science, holistic science, or methodological holism is an approach to research that emphasizes the study of complex systems. Systems are approached as coherent wholes whose component parts are best understood in context and in relation both to each other and to the whole. Holism typically stands in contrast with reductionism, which describes systems by dividing them into smaller components in order to understand them through their elemental properties.
The holism-individualism dichotomy is especially evident in conflicting interpretations of experimental findings across the social sciences, and reflects whether behavioural analysis begins at the systemic, macro-level (i.e., derived from social relations) or the component micro-level (i.e., derived from individual agents).
Overview
David Deutsch calls holism anti-reductionist, referring to the view that the only legitimate way to think about science is as a series of emergent, or higher-level, phenomena. He argues that neither the holist nor the reductionist approach is purely correct.
Two aspects of holism are:
The way of doing science, sometimes called "whole to parts", which focuses on observation of the specimen within its ecosystem first before breaking down to study any part of the specimen.
The idea that the scientist is not a passive observer of an external universe but rather a participant in the system.
Proponents claim that holistic science is naturally suited to subjects such as ecology, biology, physics and the social sciences, where complex, non-linear interactions are the norm. These are systems where emergent properties arise at the level of the whole that cannot be predicted by focusing on the parts alone, which may make mainstream, reductionist science ill-equipped to provide understanding beyond a certain level. This principle of emergence in complex systems is often captured in the phrase "the whole is greater than the sum of its parts". Living organisms are an example: no knowledge of all the chemical and physical properties of matter can explain or predict the functioning of living organisms. The same happens in complex social human systems, where detailed understanding of individual behaviour cannot predict the behaviour of the group, which emerges at the level of the collective. The phenomenon of emergence may impose a theoretical limit on knowledge available through reductionist methodology, arguably making complex systems natural subjects for holistic approaches.
Science journalist John Horgan has expressed this view in the book The End of Science. He wrote that a certain pervasive model within holistic science, self-organized criticality, for example, "is not really a theory at all. Like punctuated equilibrium, self-organized criticality is merely a description, one of many, of the random fluctuations, the noise, permeating nature." By the theorists' own admissions, he said, such a model "can generate neither specific predictions about nature nor meaningful insights. What good is it, then?"
One of the reasons that holistic science attracts supporters is that it seems to offer a progressive, 'socio-ecological' view of the world, but Alan Marshall's book The Unity of Nature offers evidence to the contrary, suggesting that holism in science is not 'ecological' or 'socially-responsive' at all, but regressive and repressive.
Examples in various fields of science
Physical science
Agriculture
Permaculture takes a systems level approach to agriculture and land management by attempting to copy what happens in the natural world. Holistic management integrates ecology and social sciences with food production. It was originally designed as a way to reverse desertification. Organic farming is sometimes considered a holistic approach.
In physics
Richard Healey offered a modal interpretation of quantum theory and used it to present an account of the puzzling quantum correlations, which portrays them as resulting from the operation of a process that violates both spatial and spatiotemporal separability. He argued that, on this interpretation, the nonseparability of the process is a consequence of physical property holism, and that the resulting account yields genuine understanding of how the correlations come about without any violation of relativity theory or Local Action. Subsequent work by Clifton, Dickson and Myrvold cast doubt on whether the account can be squared with relativity theory's requirement of Lorentz invariance, but leaves no doubt of a spatially entangled holism in the theory. Paul Davies and John Gribbin further observe that Wheeler's delayed choice experiment shows how the quantum world displays a sort of holism in time as well as space.
In the holistic approach of David Bohm, any collection of quantum objects constitutes an indivisible whole within an implicate and explicate order. Bohm said there is no scientific evidence to support the dominant view that the universe consists of a huge, finite number of minute particles, and offered instead a view of undivided wholeness: "ultimately, the entire universe (with all its 'particles', including those constituting human beings, their laboratories, observing instruments, etc.) has to be understood as a single undivided whole, in which analysis into separately and independently existent parts has no fundamental status".
Chaos and complexity
Scientific holism holds that the behavior of a system cannot be perfectly predicted, no matter how much data is available. Natural systems can produce surprisingly unexpected behavior, and it is suspected that behavior of such systems might be computationally irreducible, which means it would not be possible to even approximate the system state without a full simulation of all the events occurring in the system. Key properties of the higher level behavior of certain classes of systems may be mediated by rare "surprises" in the behavior of their elements due to the principle of interconnectivity, thus evading predictions except by brute force simulation.
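As a minimal illustration of this point, the following Python sketch runs the well-known elementary cellular automaton Rule 30: the rule governing each cell is trivial, yet, as far as is known, there is no shortcut to the pattern at step n faster than simulating all n steps. The constants and function names here are arbitrary choices for display, not part of any standard library.

```python
# Rule 30 elementary cellular automaton: each cell's next state depends
# only on itself and its two neighbours, yet the overall pattern is,
# as far as is known, obtainable only by running the full simulation.
RULE = 30
WIDTH, STEPS = 31, 15  # arbitrary display sizes

def step(cells):
    nxt = []
    for i in range(len(cells)):
        left = cells[i - 1]                  # Python's -1 index wraps the edge
        right = cells[(i + 1) % len(cells)]
        pattern = (left << 2) | (cells[i] << 1) | right
        nxt.append((RULE >> pattern) & 1)    # look up the rule bit
    return nxt

cells = [0] * WIDTH
cells[WIDTH // 2] = 1                        # single seed cell in the middle
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```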
Ecology
Holistic thinking can be applied to ecology, combining biological, chemical, physical, economic, ethical, and political insights. The complexity grows with the size of the area under study, so the scope of the analysis must be narrowed in other ways, for example by restricting it to a specific span of time.
Medicine
In primary care, the term "holistic" has been used to describe approaches that take into account social considerations and other intuitive judgements. The term holism, and so-called holistic approaches, appeared in psychosomatic medicine in the 1970s, when they were considered one possible way to conceptualize psychosomatic phenomena. Instead of charting one-way causal links from psyche to soma, or vice versa, this aimed at a systemic model in which multiple biological, psychological and social factors were seen as interlinked.
Other, alternative approaches in the 1970s were psychosomatic and somatopsychic approaches, which concentrated on causal links only from psyche to soma, or from soma to psyche, respectively. At present it is commonplace in psychosomatic medicine to state that psyche and soma cannot really be separated for practical or theoretical purposes.
The term systems medicine first appeared in 1992 and takes an integrative approach to all of the body and environment.
Social science
Economics
Some economists use a causal holism theory in their work. That is, they view the discipline in the manner of Ludwig Wittgenstein and claim that it cannot be defined by necessary and sufficient conditions.
Education reform
The Taxonomy of Educational Objectives identifies many levels of cognitive functioning, which it is claimed may be used to create a more holistic education. In authentic assessment, rather than using computers to score multiple choice tests, a standards based assessment uses trained scorers to score open-response items using holistic scoring methods. In projects such as the North Carolina Writing Project, scorers are instructed not to count errors, or count numbers of points or supporting statements. The scorer is instead instructed to judge holistically whether, "as a whole", the response is more a "2" or a "3". Critics question whether such a process can be as objective as computer scoring, and the degree to which such scoring methods can result in different scores from different scorers.
Anthropology
Anthropology is holistic in two senses. First, it is concerned with all human beings across times and places, and with all dimensions of humanity (evolutionary, biophysical, sociopolitical, economic, cultural, psychological, etc.). Further, many academic programs following this approach take a "four-field" approach to anthropology that encompasses physical anthropology, archeology, linguistics, and cultural anthropology or social anthropology.
Some anthropologists disagree, and consider holism to be an artifact from 19th century social evolutionary thought that inappropriately imposes scientific positivism upon cultural anthropology.
The term "holism" is additionally used within social and cultural anthropology to refer to a methodological analysis of a society as a whole, in which component parts are treated as functionally relative to each other. One definition says: "as a methodological ideal, holism implies ... that one does not permit oneself to believe that our own established institutional boundaries (e.g. between politics, sexuality, religion, economics) necessarily may be found also in foreign societies."
Psychology of perception
A major holist movement in the early twentieth century was gestalt psychology. The claim was that perception is not an aggregation of atomic sense data but a field, in which there is a figure and a ground. Background has holistic effects on the perceived figure. Gestalt psychologists included Wolfgang Koehler, Max Wertheimer, and Kurt Koffka. Koehler claimed the perceptual fields corresponded to electrical fields in the brain. Karl Lashley did experiments with gold foil pieces inserted in monkey brains purporting to show that such fields did not exist. However, many of the perceptual illusions and visual phenomena exhibited by the gestaltists were taken over (often without credit) by later perceptual psychologists. Gestalt psychology had influence on Fritz Perls' gestalt therapy, although some old-line gestaltists opposed the association with counter-cultural and New Age trends later associated with gestalt therapy. Gestalt theory was also influential on phenomenology. Aron Gurwitsch wrote on the role of the field of consciousness in gestalt theory in relation to phenomenology. Maurice Merleau-Ponty made much use of the work of holistic psychologists such as Kurt Goldstein in his Phenomenology of Perception.
Teleological psychology
Alfred Adler believed that the individual (an integrated whole expressed through a self-consistent unity of thinking, feeling, and action, moving toward an unconscious, fictional final goal) must be understood within the larger wholes of society, from the groups to which he belongs (starting with his face-to-face relationships) to the larger whole of mankind. The recognition of our social embeddedness and the need for developing an interest in the welfare of others, as well as a respect for nature, is at the heart of Adler's philosophy of living and principles of psychotherapy.
Edgar Morin, the French philosopher and sociologist, can be considered a holist based on the transdisciplinary nature of his work.
Skeptical reception
According to skeptics, the phrase "holistic science" is often misused by pseudosciences. In the book Science and Pseudoscience in Clinical Psychology it is noted that "Proponents of pseudoscientific claims, especially in organic medicine, and mental health, often resort to the 'mantra of holism' to explain away negative findings. When invoking the mantra, they typically maintain that scientific claims can be evaluated only within the context of broader claims and therefore cannot be evaluated in isolation." This is an invocation of Karl Popper's demarcation problem, and in a posting to Ask a Philosopher, Massimo Pigliucci clarifies Popper by positing, "Instead of thinking of science as making progress by inductive generalization (which doesn't work because no matter how many times a given theory may have been confirmed thus far, it is always possible that new, contrary, data will emerge tomorrow), we should say that science makes progress by conclusively disconfirming theories that are, in fact, wrong."
Victor J. Stenger states that "holistic healing is associated with the rejection of classical, Newtonian physics. Yet, holistic healing retains many ideas from eighteenth and nineteenth century physics. Its proponents are blissfully unaware that these ideas, especially superluminal holism, have been rejected by modern physics as well".
Some quantum mystics interpret the wave function of quantum mechanics as a vibration in a holistic ether that pervades the universe and wave function collapse as the result of some cosmic consciousness. This is a misinterpretation of the effects of quantum entanglement as a violation of relativistic causality and quantum field theory.
See also
Antireductionism
Emergence
Holarchy
Holism
Holism in ecological anthropology
Holistic management
Holistic health
Holon (philosophy)
Interdisciplinarity
Organicism
Scientific reductionism
Systems thinking
References
Further reading
Article "Patterns of Wholeness: Introducing Holistic Science" by Brian Goodwin, from the journal Resurgence
Article "From Control to Participation" by Brian Goodwin, from the journal Resurgence
Complex systems theory
Holism
Systems theory
Ecological systems theory
Ecological systems theory is a broad term used to capture the theoretical contributions of developmental psychologist Urie Bronfenbrenner. Bronfenbrenner developed the foundations of the theory throughout his career: he published a major statement of the theory in American Psychologist, articulated it in a series of propositions and hypotheses in his most cited book, The Ecology of Human Development, and further developed it in The Bioecological Model of Human Development and later writings. A primary contribution of ecological systems theory was to systemically examine contextual variability in developmental processes. As the theory evolved, it placed increasing emphasis on the role of the developing person as an active agent in development and on understanding developmental process rather than "social addresses" (e.g., gender, ethnicity) as explanatory mechanisms.
Overview
Ecological systems theory describes a scientific approach to studying lifespan development that emphasizes the interrelationship of different developmental processes (e.g., cognitive, social, biological). It is characterized by its emphasis on naturalistic and quasi-experimental studies, although several important studies using this framework use experimental methodology. Although developmental processes are thought to be universal, they are thought to (a) show contextual variability in their likelihood of occurring, (b) occur in different constellations in different settings and (c) affect different people differently. Because of this variability, scientists working within this framework use individual and contextual variability to provide insight into these universal processes.
The foundations of ecological systems theory can be seen throughout Bronfenbrenner's career. For example, in the 1950s he analyzed historical and social class variations in parenting practices, in the 1960s he wrote an analysis of gender differences focusing on the different cultural meanings of the same parenting practices for boys and girls, and in the 1970s he compared childrearing in the US and USSR, focusing on how cultural differences in the concordance of values across social institutions change parental influences.
The formal development of ecological systems theory occurred in three major stages. A major statement of the theory was published in American Psychologist. Bronfenbrenner critiqued the then-current methods of studying children in laboratories as providing a limited window on development, calling it "the science of the strange behavior of children in strange situations with strange adults for the briefest possible periods of time" (p. 513) and calling for more "ecologically valid" studies of developing individuals in their natural environment. For example, he argued that laboratory studies of children provided insight into their behavior in an unfamiliar ("strange") setting that had limited generalizability to their behavior in more familiar environments, such as home or school. The Ecology of Human Development articulated a series of definitions, propositions and hypotheses that could be used to study human development. This work categorized developmental processes, beginning with genetic and personal characteristics, through proximal influences that the developing person interacted with directly (e.g., social relationships), to influences such as parents' work, government policies or cultural value systems that affected them indirectly. As the theory evolved, it placed increasing emphasis on the role of the developing person as an active agent in development and on understanding developmental process rather than "social addresses" (e.g., gender, ethnicity) as explanatory mechanisms. The final form of the theory, developed in conjunction with Stephen Ceci, was called the Bioecological Model of Human Development and addressed critiques that previous statements of the theory under-emphasized individual difference and efficacy. Developmental processes were conceived of as co-occurring in niches that were lawfully defined and reinforcing. Because of this, Bronfenbrenner was a strong proponent of using social policy interventions as both a way of using science to improve child well-being and as an important scientific tool. Early examples of the application of ecological systems theory are evident in Head Start.
The five systems
Microsystem: Refers to the institutions and groups that most immediately and directly impact the child's development including: family, school, siblings, neighborhood, and peers.
Mesosystem: Consists of interconnections between the microsystems, for example between the family and teachers or between the child's peers and the family.
Exosystem: Involves links between social settings that do not involve the child. For example, a child's experience at home may be influenced by their parent's experiences at work. A parent might receive a promotion that requires more travel, which in turn increases conflict with the other parent resulting in changes in their patterns of interaction with the child.
Macrosystem: Describes the overarching culture that influences the developing child, as well as the microsystems and mesosystems embedded in those cultures. Cultural contexts can differ based on geographic location, socioeconomic status, poverty, and ethnicity. Members of a cultural group often share a common identity, heritage, and values. Macrosystems evolve across time and from generation to generation.
Chronosystem: Consists of the pattern of environmental events and transitions over the life course, as well as changing socio-historical circumstances. For example, researchers have found that the negative effects of divorce on children often peak in the first year after the divorce. By two years after the divorce, family interaction is less chaotic and more stable. An example of changing sociohistorical circumstances is the increase in opportunities for women to pursue a career during the last thirty years.
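As a rough, hypothetical sketch of how these nested layers relate, the following Python fragment models them as fields of a single record; the class name, field names, and sample contents are invented for illustration and do not correspond to any standard research instrument.

```python
from dataclasses import dataclass, field

@dataclass
class EcologicalContext:
    """Hypothetical rendering of Bronfenbrenner's five systems for one child."""
    microsystem: list[str]             # settings with direct contact: family, school, peers
    mesosystem: list[tuple[str, str]]  # interconnections between microsystems
    exosystem: list[str]               # settings that affect the child only indirectly
    macrosystem: str                   # overarching cultural context
    chronosystem: list[str] = field(default_factory=list)  # events and transitions over time

child_context = EcologicalContext(
    microsystem=["family", "school", "peers"],
    mesosystem=[("family", "school")],          # e.g., parent-teacher contact
    exosystem=["parent's workplace"],
    macrosystem="urban, middle-income culture",
    chronosystem=["parental divorce (year 1)"],
)
print(child_context.mesosystem)
```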
Later work by Bronfenbrenner considered the role of biology in this model as well; thus the theory has sometimes been called the bioecological model.
Per this theoretical construction, each system contains roles, norms and rules which may shape psychological development. For example, an inner-city family faces many challenges which an affluent family in a gated community does not, and vice versa. The inner-city family is more likely to experience environmental hardships, like crime and squalor. On the other hand, the sheltered family is more likely to lack the nurturing support of extended family.
Since its publication in 1979, Bronfenbrenner's major statement of this theory, The Ecology of Human Development has had widespread influence on the way psychologists and others approach the study of human beings and their environments. As a result of his groundbreaking work in human ecology, these environments—from the family to economic and political structures—have come to be viewed as part of the life course from childhood through adulthood.
Bronfenbrenner has identified Soviet developmental psychologist Lev Vygotsky and German-born psychologist Kurt Lewin as important influences on his theory.
Bronfenbrenner's work provides one of the foundational elements of the ecological counseling perspective, as espoused by Robert K. Conyne, Ellen Cook, and the University of Cincinnati Counseling Program.
There are many different theories related to human development. Human ecology theory emphasizes environmental factors as central to development.
See also
Bioecological model
Ecosystem
Ecosystem ecology
Systems ecology
Systems psychology
Theoretical ecology
Urie Bronfenbrenner
References
The diagram of the ecosystemic model was created by Buehler (2000) as part of a dissertation on assessing interactions between a child, their family, and the school and medical systems.
Further reading
Urie Bronfenbrenner. (2009). The Ecology of Human Development: Experiments by Nature and Design. Cambridge, Massachusetts: Harvard University Press.
Dede Paquette & John Ryan. (2001). Bronfenbrenner’s Ecological Systems Theory
Marlowe E. Trance & Kerstin O. Flores. (2014). Child and Adolescent Development. Vol. 32, no. 5, 9407.
Ecological Systems Review
The ecological framework facilitates organizing information about people and their environment in order to understand their interconnectedness. Individuals move through a series of life transitions, all of which necessitate environmental support and coping skills. Social problems involving health care, family relations, inadequate income, mental health difficulties, conflicts with law enforcement agencies, unemployment, educational difficulties, and so on can all be subsumed under the ecological model, which would enable practitioners to assess factors that are relevant to such problems (Hepworth, Rooney, Rooney, Strom-Gottfried, & Larsen, 2010, p. 16). Thus, examining the ecological contexts of parenting success of children with disabilities is particularly important. Utilizing Bronfenbrenner's (1977, 1979) ecological framework, this article explores parenting success factors at the micro- (i.e., parenting practice, parent-child relations), meso- (i.e., caregivers' marital relations, religious social support), and macro-system levels (i.e., cultural variations, racial and ethnic disparities, and health care delivery system) of practice.
Developmental psychology
Human ecology
Psychological schools
Psychological theories
Systems psychology
Systems theory
Universal Design for Learning
Universal Design for Learning (UDL) is an educational framework based on research in learning theory, including cognitive neuroscience, that guides the development of flexible learning environments and learning spaces that can accommodate individual learning differences.
Universal Design for Learning is a set of principles that provide teachers with a structure for developing instruction that meets the diverse needs of all learners.
The UDL framework, first defined by David H. Rose, Ed.D. of the Harvard Graduate School of Education and the Center for Applied Special Technology (CAST) in the 1990s, calls for creating a curriculum from the outset that provides:
Multiple means of representation to give learners various ways of acquiring information and knowledge,
Multiple means of expression to provide learners alternatives for demonstrating what they know, and
Multiple means of engagement to tap into learners' interests, challenge them appropriately, and motivate them to learn.
Curriculum, as defined in the UDL literature, has four parts: instructional goals, methods, materials, and assessments. UDL is intended to increase access to learning by reducing physical, cognitive, intellectual, and organizational barriers to learning, as well as other obstacles. UDL principles also lend themselves to implementing inclusionary practices in the classroom.
Universal Design for Learning is referred to by name in American legislation, such as the Higher Education Opportunity Act (HEOA) of 2008 (Public Law 110-315), the 2004 reauthorization of the Individuals with Disabilities Education Act (IDEA), and the Assistive Technology Act of 1998. The emphasis is on equal access to the curriculum by all students, and the accountability required by IDEA 2004 and No Child Left Behind legislation has created a need for a practice that accommodates all learners.
Origins
The concept and language of Universal Design for Learning was inspired by the universal design movement in architecture and product development, originally formulated by Ronald L. Mace at North Carolina State University. Universal design calls for "the design of products and environments to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design". UDL applies this general idea to learning: curriculum should, from the outset, be designed to accommodate all kinds of learners. Educators have to be deliberate in the teaching and learning process in the classroom (e.g., preparing a class learning profile for each student). This enables grouping by interest, special assistance for students who face challenges, and the use of specific multimedia to meet the needs of all students.
However, recognizing that the UD principles created to guide the design of things (e.g., buildings, products) are not adequate for the design of social interactions (e.g., human learning environments), researchers at CAST looked to the neurosciences and theories of progressive education in developing the UDL principles. In particular, the work of Lev Vygotsky and, less directly, Benjamin Bloom informed the three-part UDL framework.
Some educational initiatives, such as Universal Design for Instruction (UDI) and Universal Instructional Design (UID), adapt the Mace principles for products and environments to learning environments, primarily at the postsecondary level. While these initiatives are similar to UDL, and have, in some cases, compatible goals, they are not equivalent to UDL and the terms are not interchangeable; they refer to distinct frameworks. On the other hand, UDI practices promoted by the DO-IT Center operationalize both UD and UDL principles to help educators maximize the learning of all students.
Implementation initiatives in the US
In 2006, representatives from more than two dozen educational and disability organizations in the US formed the National Universal Design for Learning Taskforce. The goal was to raise awareness of UDL among national, state, and local policymakers.
The organizations represented in the National Task Force on UDL include the National School Boards Association, the National Education Association (NEA), the American Federation of Teachers (AFT), the National Association of State Directors of Special Education (NASDSE), the Council of Chief State School Officers (CCSSO), the National Down Syndrome Society (NDSS), the Council for Learning Disabilities (CLD), the Council for Exceptional Children (CEC), the National Center for Learning Disabilities (NCLD), the National Association of Secondary School Principals (NASSP), Easter Seals, American Foundation for the Blind (AFB), Association on Higher Education and Disability, Higher Education Consortium for Special Education (HECSE), American Occupational Therapy Association, National Association of State Boards of Education (NASBE), National Down Syndrome Congress (NDSC), Learning Disabilities Association of America (LDA), TASH, the Arc of the United States, the Vocational Evaluation and Career Assessment Professionals Association (VECAP), the National Cerebral Palsy Association, and the Advocacy Institute.
Activities have included sponsoring a Congressional staff briefing on UDL in February 2007 and supporting efforts to include UDL in major education legislation for both K–12 and postsecondary.
Research
Research evidence on UDL is complicated, as it is hard to isolate UDL from other pedagogical practices: for example, Coppola et al. (2019) combine UDL with Culturally Sustaining Pedagogy, and Phuong and Berkeley (2017) combine it with Adaptive Equity Oriented Pedagogy (AEP). Coppola et al. provide phenomenological evidence that learners with a variety of needs find UDL helpful for their learning. Phuong and Berkeley, using a randomized controlled trial, found that AEP, which is based on UDL, led to a significant improvement in students' grades, even when several confounding variables were controlled for.
Baumann and Melle (2019) report in a small-scale study of 89 students, 73 without specific educational needs and 16 with specific educational needs, that the inclusion of UDL enhanced both students’ performance and their enjoyment of the learning experience.
Assistive Technology for UDL
Assistive technology (AT) is a pedagogical approach that can be used to enforce universal design for learning (UDL) in the inclusive classroom. AT and UDL can be theorized as two ends of a spectrum, where AT is on one end addressing personal or individual student needs, and UDL is on the other end concerned with classroom needs and curriculum design. Around the center of this spectrum, AT and UDL overlap such that students' individual needs are addressed within the context of the larger curriculum, ideally without segregation or exclusion. UDL provides educators with the framework for an educational curriculum that addresses students' diverse learning styles and interests via AT.
According to the Technology-Related Assistance for Individuals with Disabilities Act of 1988 and the Individuals with Disabilities Education Act of 2004, AT includes AT devices and services. AT devices are physical hardware, equipment, or software used to improve a person's cognitive, emotional, and/or behavioral experience. These devices differ from medical ones, which may be implanted surgically. AT services aid a person in choosing and/or using AT devices.
Types of Assistive Technology
Low-tech
Assistive technology devices can be characterized as low-tech, mid-tech, or high-tech. Low-tech devices are low in cost and students who use them do not usually need to participate in training. Low-tech devices include graphic organizers, visual aids, grid or stylized paper, and pencil grips, among others. Low-tech AT would be a first step in addressing a student's needs.
Mid-tech
Should students require additional support, educators can try implementing mid-tech devices, which do not necessarily require additional training and usually function with a power source, but are more affordable than their high-tech alternatives. Mid-tech devices include audiobooks, simple-phrase communication software, predictive text software (e.g., WordQ), and some tablets.
High-tech
High-tech devices are more complex types of AT. These devices are higher in cost and require extensive user training. Some examples of high-tech devices are text-to-speech and speech-to-text software, wheelchairs with alternative navigation software, and alternative mouse software. It is important to provide students and their families with low-cost recommendations for high-cost devices.
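As one concrete illustration, a minimal text-to-speech sketch can be written in Python with the third-party pyttsx3 library; this assumes the library is installed, and the sample sentence and rate adjustment are placeholders rather than a recommended configuration.

```python
import pyttsx3  # third-party text-to-speech library; install with: pip install pyttsx3

# Initialize the speech engine and slow the speaking rate slightly,
# which can help learners who process spoken language more slowly.
engine = pyttsx3.init()
engine.setProperty("rate", max(engine.getProperty("rate") - 40, 80))

# Read a passage aloud; in practice this would be the on-screen text
# a student has selected.
engine.say("Universal Design for Learning offers multiple means of representation.")
engine.runAndWait()
```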
Implementation of Assistive Technology
The variety of assistive technology is what supports teachers in implementing universal design for learning (UDL) in their classrooms. The UDL framework promotes a flexible curriculum, which would be further supported by the implementation of various assistive technologies depending on the need of the student. For example, a student struggling in a language course might need digital AT to assist them in initiating or cueing the development of their ideas. However, from a UDL perspective, the teacher recognizes that the current version of the curriculum does not acknowledge forms of expression aside from manual writing. The teacher can adjust the curriculum to adapt to the needs of the students and implement AT to assist each individual student with their unique learning needs.
Research shows that the use of physical or virtual manipulatives improves academic performance in students, but it is difficult to compare results between classrooms since each classroom differs in how they implement assistive technology. Generally, teachers and other staff members need to consider the students' internal and external factors when implementing AT devices or services. Internal factors involve assessing the individual needs of the student, sometimes with neuropsychological testing by the school's professional staff, and deciding what type of AT addresses their need. External factors involve considering whether the classroom environment and the student's home environment can support the implementation of the AT including space requirements and training for teachers, students, and their families. More resources and attention need to be allocated towards teacher and staff training in using AT to support UDL practices in the classroom.
Notes
Pedagogy
Higher education
Mindset
A mindset refers to an established set of attitudes of a person or group concerning culture, values, philosophy, frame of reference, outlook, or disposition. It may also arise from a person's worldview or beliefs about the meaning of life.
Some scholars claim that people can have multiple types of mindsets. Some of these types include a growth mindset, fixed mindset, poverty mindset, abundance mindset, and positive mindset among others that make up a person's overall mindset.
More broadly, scholars have found that mindset is associated with a range of functional effects in different areas of people's lives. This includes influencing a person's capacity for perception by functioning like a filter, a frame of reference, a meaning-making system, and a pattern of perception. Mindset is described as shaping a person's capacity for development by being associated with passive or conditional learning, incremental or horizontal learning, and transformative or vertical learning. Mindset is also believed to influence a person's behavior, having deliberative or implemental action phases, as well as being associated with technical or adaptive approaches to leadership.
A mindset could create an incentive to adopt (or accept) previous behaviors, choices, or tools, sometimes known as cognitive inertia or groupthink. When a prevailing mindset is limiting or inappropriate, it may be difficult to counteract the grip of mindset on analysis and decision-making.
In cognitive psychology, a mindset is the cognitive process activated in a task. In addition to the field of cognitive psychology, the study of mindset is evident in the social sciences and other fields (such as positive psychology). Characteristic of this area of study is its fragmentation among academic disciplines.
History
Numerous scholars have identified mindset history as being a critical gap in contemporary literature and also in current approaches to mindset education and training.
The first dedicated review of mindset history found that mindset psychology has a century-long history of explicit research and practice, with its origin phase taking place between 1908 and 1939, early inquiries occurring between 1940 and 1987, and contemporary bodies of work emerging in and beyond 1988. This review also identified some of the traditions of research and practice that are closely related to the origins and history of mindset psychology, some of which span back hundreds and thousands of years. Then there are the lineages of research and practice that did not explicitly use the term mindset, but which bear some resemblance to it and are in some way related to this history. Peter Gollwitzer has conducted explorations of mindset since the 1990s. Gollwitzer's contributions include his theory of mindset and the mindset theory of action phases.
Politics
A political example is the "Cold War mindset" in the U.S. and the USSR, which included belief in game theory, in a chain of command in control of nuclear materials, and in the mutual assured destruction of both in a nuclear war. This mindset prevented an attack by either country, but deterrence theory has made assessments of the Cold War mindset a subject of controversy.
Modern military theory attempts to challenge entrenched mindsets in asymmetric warfare, terrorism, and the proliferation of weapons of mass destruction. These threats are "a revolution in military affairs", requiring rapid adaptation to new threats and circumstances.
Systems theory
Building on Magoroh Maruyama's concept of mindscape, mindset includes a cultural and social orientation: hierarchical and egalitarian individualism, hierarchical and egalitarian collectivism, hierarchical and egalitarian synergism, and hierarchical and egalitarian populism.
Collective mindset
Collective mindsets are described in Edwin Hutchins's Cognition in the Wild (1995) and Maximilian Senges' Knowledge Entrepreneurship in Universities (2007). Hutchins analyzed a team of naval navigators as a cognitive unit or computational system, and Senges explained how a collective mindset is part of university strategy and practice.
Parallels exist in collective intelligence and the wisdom of the crowd. Zara said that since collective reflection is more explicit, discursive, and conversational, it needs a good Gestell.
Erik H. Erikson's analysis of group-identities and what he calls a "life-plan" is relevant to a collective mindset. Erikson cites Native Americans who were meant to undergo a reeducation process to instill a modern "life-plan" which advocated housing and wealth; the natives' collective historic identity as buffalo hunters was oriented around such fundamentally different motivations that communication about life plans was difficult.
An institution is related to collective mindset; an entrepreneurial mindset refers to a person who "values uncertainty in the marketplace and seeks to continuously identify opportunities with the potential to lead to important innovations". An institution with an entrepreneurial philosophy will have entrepreneurial goals and strategies. It fosters an entrepreneurial milieu, allowing each entity to pursue emerging opportunities. A collective mindset fosters values which lead to a particular practice. Hitt cites the five dimensions of an entrepreneurial mindset as "autonomy, innovativeness, risk taking, proactiveness, and competitive aggressiveness".
Theories
The study of mindsets includes definition, measurement, and categorization. Scholars in the same discipline differ.
Mindset agency
Sagiv and Schwartz defined cultural values to explain the nature, functions, and variables which characterize mindset agency. They posited three bipolar dimensions of culture, based on values: cognitive (embedded or autonomous), figurative (mastery or harmony), and operative (hierarchical or egalitarian).
Mindscape theory
The Myers–Briggs Type Indicator (MBTI) measures psychological functions which, paired with social attitudes, combine to generate personality types that may be evaluated by exploring individual preferences. Maruyama's mindscape theory measures individuals on a scale of characteristics and places them into one of four personality categories.
Fixed and growth mindsets
According to Carol Dweck, individuals can be placed on a continuum according to their views of where ability originates, from a fixed to a growth mindset. An individual's mindset affects the "motivation to practice and learn".
People with a fixed mindset believe that "intelligence is static", and little can be done to improve ability. Feedback is seen as "evaluation of their underlying ability" and success is seen as a result of this ability, not any effort expended. Failure is intimidating, since it "suggests constraints or limits they would not be able to overcome". Those with a fixed mindset tend to avoid challenges, give up easily, and focus on the outcome. They believe that their abilities are fixed, and effort has little value.
Those with a growth mindset believe that "intelligence can be developed", and their abilities can be increased by learning. They tend to embrace challenges, persevere in the face of adversity, accept and learn from failure, focus on process rather than outcome, and see abilities as skills which are developed through effort. Feedback and failure are seen as opportunities to increase ability, signaling the "need to pay attention, invest effort, apply time to practice, and master the new learning opportunity".
Grit, a personality trait combining determination and perseverance, is related to a growth mindset. Keown and Bourke discussed the importance of a growth mindset and grit. Their 2019 study found that people with lower economic status had a greater chance of success if they had a growth mindset and were willing to work through tribulation.
Much of Dweck's research was related to the effect of a student's mindset on classroom performance. For students to develop a growth mindset, a nurturing classroom culture must be established with appropriate praise and encouragement. According to Dweck, "Praising students for the process they have engaged in—the effort they applied, the strategies they used, the choices they made, the persistence they displayed, and so on—yields more long-term benefits than telling them they are 'smart' when they succeed". Teachers need to design meaningful learning activities for their students: "The teacher should portray challenges as fun and exciting, while portraying easy tasks as boring and less useful for the brain".
A second strategy to promote a growth mindset in the classroom is more explicit: establishing personal goals and having students "write about and share with one another something they used to be poor at and now are very good at." Hinda Hussein studied the positive effect of reflective journal writing on students' growth mindset; journaling can improve a student's conceptual knowledge and enhance the understanding of their thoughts. Dweck has identified the word "yet" as a valuable tool to assess learning. If a teacher hears students saying that they are not good at something or cannot do something, they should interject "not yet" to reinforce the idea that ability and motivation are fluid.
Dweck and Jo Boaler indicate a fixed mindset can lead to sex differences in education, which can partially explain low achievement and participation by minority and female students. Boaler builds on Dweck's research to show that "gender differences in mathematics performance only existed among fixed mindset students". Boaler and Dweck say that people with growth mindsets can gain knowledge. Boaler said, "The key growth mindset message was that effort changes the brain by forming new connections, and that students control this process. The growth mindset intervention halted the students' decline in grades and started the students on a new pathway of improvement and high achievement".
L. S. Blackwell presented research in 2015 exploring whether growth mindsets can be promoted in minority groups. Blackwell builds on Dweck's research, observing minority groups and finding that "students with a growth mindset had stronger learning goals than the fixed mindset students." These students "had much more positive attitudes toward effort, agreeing that 'when something is hard, it just makes me want to work more on it, not less.'" Students with a fixed mindset were more likely to say that "if you're not good at a subject, working hard won't make you good at it" and "when I work hard at something, it makes me feel like I'm not very smart".
Dweck's research on growth and fixed mindsets is useful in intervening with at-risk students, dispelling negative stereotypes in education held by teachers and students, understanding the impacts of self-theories on resilience, and understanding how praise can foster a growth mindset and positively impact student motivation. There has also been movement towards the application of Dweck's mindset research in non-academic environments, such as the workplace. Other scholars have conducted research building on her findings. A 2018 study by Rhew et al. suggested that a growth-mindset intervention can increase the motivation of adolescent special-education participants. A 2019 study by Wang et al. suggested that substance use has adverse effects on adolescent reasoning. Developing a growth mindset in these adolescents was shown to reduce this adverse effect. These studies illustrate how educators can intervene, encouraging a growth mindset, by allowing students to see that their behavior can be changed with effort. Criticism has been directed at "growth mindset" and related research, however. Moreau et al. (2019) suggest "that overemphasizing the malleability of abilities and other traits can have negative consequences for individuals, science, and society."
Follow-up research after the release of her book led Dweck to be quoted as saying, "Nobody has a growth mindset in everything all the time," along with acknowledging the reality of the false growth mindset and the truer growth mindset. One of Dweck's concerns was that educators were giving praise based on effort alone, even when the results did not, she believed, merit praise. Researchers noted adults within a study "who agree with growth mindset, but do not behave as though they believe ability can change" as holding a false growth mindset.
Students and teachers
Elements of personality (such as sensitivity to mistakes and setbacks) may predispose toward a particular mindset, which can be developed and reshaped through interactions. In a number of studies, Dweck and her colleagues noted that alterations in mindset could be achieved through "praising the process through which success was achieved", "having [college aged students] read compelling scientific articles that support one view or the other", or teaching junior-high-school students "that every time they try hard and learn something new, their brain forms new connections that, over time, make them smarter."
Much research in education focuses on a student's ability to adopt a growth mindset, and less attention is paid to teachers' mindsets and their influence on students. Hattie writes, "Differing mindsets, or assumptions, that teachers possess about themselves and their students play a significant role in determining their expectations, teaching practices, and how students perceive their own mindset."
A study by Patrick and Joshi explored how teachers explain growth and fixed mindsets, with two major findings in 150 semi-structured interviews. First, they found that teachers' prior beliefs about learning and students influenced how they engaged with their mindsets. Second, they found that many teachers oversimplified growth and fixed mindsets as positive and negative traits.
A study conducted by Fiona S. Seaton (2018) examined the impact of teacher training to influence mindset. The teachers in this study had six training sessions, and Seaton found that the sessions had an impact on their mindsets which was sustained three months afterward. The results of this study suggest that adult mindsets are malleable, and can shift with appropriate supports.
Benefit mindset
In 2015, Ashley Buchanan and Margaret L. Kern proposed a benefit mindset: an evolution of the fixed and growth mindsets. The benefit mindset describes society's leaders, who promote individual and collective well-being: people who discover their strengths to contribute to causes greater than the self. They question why they do what they do, positioning their actions within a purposeful context.
Global mindset
Originating from the study of organizational leadership and coinciding with the growth of multinational corporations during the 1980s, organizations observed that executive effectiveness did not necessarily translate cross-culturally. A global mindset emerged as an explanation. Cross-cultural leaders were hypothesized to need an additional skill, ability, or proficiency (a global mindset) to be effective regardless of culture or context. Cultural agility refers to such a need. A defining characteristic of the study of global mindset is the variety with which scholars define it, but they typically agree that global mindset and its development increase global effectiveness for individuals and organizations.
Abundance and scarcity
People with an abundance mindset believe that there are enough resources for everyone, and see the glass as half-full; those with a scarcity mindset believe that there is a limited number of resources, and see the glass as half-empty. Mehta and Zhu found that an "abundance mindset makes people think beyond established functionalities to explore broadly for solutions, thereby heightening creativity. In contrast, a scarcity mindset induces functional fixedness, thereby reducing creativity."
Productive and defensive mindsets
According to Chris Argyris, organizations have two dominant mindsets: productive and defensive. The productive mindset is grounded in logic and focused on knowledge and its certifiable results: a decision-making mindset which is transparent and auditable.
The defensive mindset is closed, self-protective and self-deceptive. It does not see the greater good, but centers on individual defense; truth, if perceived as harmful to the person concerned, would be denied. This may allow personal growth, but no organizational growth or development.
Deliberative and implemental mindsets
The deliberative and implemental mindsets are part of the decision making process in goal setting and goal striving. When someone has a deliberative mindset, they are considering a variety of actions and have not yet settled on what they are going to do. This person will tend to be open to alternative options when presented and will explore ideas until they have decided upon a course of action. This mindset is connected to the idea of goal setting.
After someone narrows down their options and makes a commitment to follow a particular path, they will have an implemental mindset. People with an implemental mindset are less open to alternative courses of action because they have already decided what they are going to do and now focus more energy on goal striving, rather than goal setting.
The deliberative mindset has been recognized as important for coming to conclusions in order to make a well-planned goal, but it has negative consequences for goal striving once a goal is already in place. On the other hand, the implemental mindset helps people to focus their behavior in a particular direction; this can be detrimental for someone who has not spent sufficient time with a deliberative mindset.
Promotion and prevention mindsets
The promotion and prevention mindsets are motivational orientations that are focused on the outcomes or consequences of behavior. People with a promotion mindset focus on achievement and accomplishment. Those with a prevention mindset pay closer attention to avoiding negative outcomes. They act more out of a sense of obligation and the fulfillment of duty than to seek any sort of reward. Both of these mindsets can be caused or influenced by individual disposition or by environmental stimuli. Those who are dispositionally in a promotion mindset seek to make good things happen, and situations that encourage a promotion mindset are those in which there is a promise of gain. Those with a dispositional prevention mindset believe that they need to keep bad things from happening, and situations conducive to the prevention mindset are those in which the idea of duty is emphasized.
Those with a promotion mindset are characterized as being eager and quick to act. They take initiative and move to cause improvements towards their ideal state. People with a prevention mindset are characterized as being cautious and careful, avoiding risks and any course of action that could potentially cause failure in reaching a goal.
Criticism
In 2019, a large randomized controlled trial of growth mindset training conducted by the Education Endowment Foundation showed no significant increase in numeracy or literacy. A 2024 study showed that Carol Dweck's growth mindset scales have psychometric comparability; however, the same study found no connection between growth mindset and goal achievement.
See also
Dual mentality
Bounded rationality
Elitism
Ethical egoism
Game theory
Good and evil
Property dualism
Rational irrationality
Notes
Cognitive biases
Acculturation
Acculturation is a process of social, psychological, and cultural change that stems from the balancing of two cultures while adapting to the prevailing culture of the society. Acculturation is a process in which an individual adopts, acquires and adjusts to a new cultural environment as a result of being placed into a new culture, or when another culture is brought to someone. Individuals of a differing culture try to incorporate themselves into the new, more prevalent culture by participating in aspects of that culture, such as its traditions, while still holding onto their original cultural values and traditions. The effects of acculturation can be seen at multiple levels in both the devotee of the prevailing culture and those who are assimilating into the culture.
At the group level, acculturation often results in changes to culture, religious practices, health care, and other social institutions. There are also significant ramifications on the food, clothing, and language of those becoming introduced to the overarching culture.
At the individual level, the process of acculturation refers to the socialization process by which foreign-born individuals blend the values, customs, norms, cultural attitudes, and behaviors of the overarching host culture. This process has been linked to changes in daily behaviour, as well as numerous changes in psychological and physical well-being. As enculturation is used to describe the process of first-culture learning, acculturation can be thought of as second-culture learning.
Under normal circumstances that are seen commonly in today's society, the process of acculturation normally occurs over a large span of time throughout a few generations. Physical force can be seen in some instances of acculturation, which can cause it to occur more rapidly, but it is not a main component of the process. More commonly, the process occurs through social pressure or constant exposure to the more prevalent host culture.
Scholars in different disciplines have developed more than 100 different theories of acculturation, but the concept of acculturation has only been studied scientifically since 1918. As it has been approached at different times from the fields of psychology, anthropology, and sociology, numerous theories and definitions have emerged to describe elements of the acculturative process. Despite definitions and evidence that acculturation entails a two-way process of change, research and theory have primarily focused on the adjustments and adaptations made by minorities such as immigrants, refugees, and indigenous people in response to their contact with the dominant majority. Contemporary research has primarily focused on different strategies of acculturation, how variations in acculturation affect individuals, and interventions to make this process easier.
Historical approaches
The history of Western civilization, and in particular the histories of Europe and the United States, are largely defined by patterns of acculturation.
One of the most notable forms of acculturation is imperialism, the most common progenitor of direct cultural change. Although these cultural changes may seem simple, the combined results are both robust and complex, impacting both groups and individuals from the original culture and the host culture. Anthropologists, historians, and sociologists have studied acculturation with dominance almost exclusively, primarily in the context of colonialism, as a result of the expansion of western European peoples throughout the world during the past five centuries.
The first psychological theory of acculturation was proposed in W.I. Thomas and Florian Znaniecki's 1918 study, The Polish Peasant in Europe and America. From studying Polish immigrants in Chicago, they illustrated three forms of acculturation corresponding to three personality types: Bohemian (adopting the host culture and abandoning their culture of origin), Philistine (failing to adopt the host culture but preserving their culture of origin), and creative-type (able to adapt to the host culture while preserving their culture of origin). In 1936, Redfield, Linton, and Herskovits provided the first widely used definition of acculturation as: "those phenomena which result when groups of individuals having different cultures come into continuous first-hand contact, with subsequent changes in the original culture patterns of either or both groups".
Long before efforts toward racial and cultural integration in the United States arose, the common process was assimilation. In 1964, Milton Gordon's book Assimilation in American Life outlined seven stages of the assimilative process, setting the stage for literature on this topic. Later, Young Yun Kim authored a reiteration of Gordon's work, but argued cross-cultural adaptation as a multi-staged process. Kim's theory focused on the unitary nature of psychological and social processes and the reciprocal functional interdependence of the person and their environment. Although this view was the earliest to fuse micro-psychological and macro-social factors into an integrated theory, it is clearly focused on assimilation rather than racial or ethnic integration. In Kim's approach, assimilation is unilinear and the sojourner must conform to the majority group culture in order to be "communicatively competent." According to Gudykunst and Kim (2003) the "cross-cultural adaptation process involves a continuous interplay of deculturation and acculturation that brings about change in strangers in the direction of assimilation, the highest degree of adaptation theoretically conceivable." This view has been heavily criticized, since the biological science definition of adaptation refers to the random mutation of new forms of life, not the convergence of a monoculture (Kramer, 2003).
In contradistinction to Gudykunst and Kim's version of adaptive evolution, Eric M. Kramer developed his theory of Cultural Fusion (1997a, 2000a, 2010, 2011, 2012), maintaining clear, conceptual distinctions between assimilation, adaptation, and integration. According to Kramer, assimilation involves conformity to a pre-existing form. Kramer's (2000a, 2000b, 2000c, 2003, 2009, 2011) theory of Cultural Fusion, which is based on systems theory and hermeneutics, argues that it is impossible for a person to unlearn themselves and that, by definition, "growth" is not a zero-sum process that requires the dissolution of one form for another to come into being, but rather a process of learning new languages and cultural repertoires (ways of thinking, cooking, playing, working, worshiping, and so forth). In other words, Kramer argues that one need not unlearn a language to learn a new one, nor does one have to unlearn who one is to learn new ways of dancing, cooking, talking, and so forth. Unlike Gudykunst and Kim (2003), Kramer argues that this blending of language and culture results in cognitive complexity, or the ability to switch between cultural repertoires. To put Kramer's ideas simply, learning is growth rather than unlearning.
Conceptual models
Theory of Dimensional Accrual and Dissociation
Although numerous models of acculturation exist, the most complete models take into consideration the changes occurring at the group and individual levels of both interacting groups. To understand acculturation at the group level, one must first look at the nature of both cultures before coming into contact with one another. A useful approach is Eric Kramer's theory of Dimensional Accrual and Dissociation (DAD). Two fundamental premises in Kramer's DAD theory are the concepts of hermeneutics and semiotics, which hold that identity, meaning, communication, and learning all depend on differences or variance. According to this view, total assimilation would result in a monoculture void of personal identity, meaning, and communication. Kramer's DAD theory also utilizes concepts from several scholars, most notably Jean Gebser and Lewis Mumford, to synthesize explanations of widely observed cultural expressions and differences.
Kramer's theory identifies three communication styles (idolic, symbolic, or signalic) in order to explain cultural differences. It is important to note that in this theory, no single mode of communication is inherently superior, and no final solution to intercultural conflict is suggested. Instead, Kramer puts forth three integrated theories: the theory of Dimensional Accrual and Dissociation, the Cultural Fusion Theory, and the Cultural Churning Theory.
For instance, according to Kramer's DAD theory, a statue of a god in an idolic community is the god, and stealing it is a highly punishable offense. For example, many people in India believe that statues of the god Ganesh are the god himself, so taking such a statue/god from its temple is more than theft, it is blasphemy. Idolic reality involves strong emotional identification, where a holy relic does not simply symbolize the sacred, it is sacred. By contrast, a Christian crucifix follows a symbolic nature, where it represents a symbol of God. Lastly, the signalic modality is far less emotional and increasingly dissociated.
Kramer refers to changes in each culture due to acculturation as co-evolution. Kramer also addresses what he calls the qualities of out vectors, which concern the nature in which the former and new cultures make contact. Kramer uses the phrase "interaction potential" to refer to differences in individual or group acculturative processes. For example, the process of acculturation is markedly different if one is entering the host as an immigrant or as a refugee. Moreover, this idea encapsulates the importance of how receptive a host culture is to the newcomer, how easy it is for the newcomer to interact with and get to know the host, and how this interaction affects both the newcomer and the host.
Fourfold models
The fourfold model is a bilinear model that categorizes acculturation strategies along two dimensions. The first dimension concerns the retention or rejection of an individual's minority or native culture (i.e. "Is it considered to be of value to maintain one's identity and characteristics?"), whereas the second dimension concerns the adoption or rejection of the dominant group or host culture. ("Is it considered to be of value to maintain relationships with the larger society?") From this, four acculturation strategies emerge.
Assimilation occurs when individuals adopt the cultural norms of a dominant or host culture, over their original culture. Sometimes it is forced by governments.
Separation occurs when individuals reject the dominant or host culture in favor of preserving their culture of origin. Separation is often facilitated by immigration to ethnic enclaves.
Integration occurs when individuals can adopt the cultural norms of the dominant or host culture while maintaining their culture of origin. Integration leads to, and is often synonymous with, biculturalism.
Marginalization occurs when individuals reject both their culture of origin and the dominant host culture.
Studies suggest that individuals' respective acculturation strategy can differ between their private and public life spheres. For instance, an individual may reject the values and norms of the dominant culture in their private life (separation), whereas they might adapt to the dominant culture in public parts of their life (i.e., integration or assimilation).
Predictors of acculturation strategies
The fourfold models used to describe individual attitudes of immigrants parallel models used to describe group expectations of the larger society and how groups should acculturate. In a melting pot society, in which a harmonious and homogenous culture is promoted, assimilation is the endorsed acculturation strategy. In segregationist societies, in which humans are separated into racial, ethnic and/or religious groups in daily life, a separation acculturation strategy is endorsed. In a multiculturalist society, in which multiple cultures are accepted and appreciated, individuals are encouraged to adopt an integrationist approach to acculturation. In societies where cultural exclusion is promoted, individuals often adopt marginalization strategies of acculturation.
Attitudes towards acculturation, and thus the range of acculturation strategies available, have not been consistent over time. For example, for most of American history, policies and attitudes have been based around established ethnic hierarchies with an expectation of one-way assimilation for predominantly White European immigrants. Although the notion of cultural pluralism has existed since the early 20th century, the recognition and promotion of multiculturalism did not become prominent in America until the 1980s. Separatism can still be seen today in autonomous religious communities such as the Amish and the Hutterites. Immediate environment also impacts the availability, advantage, and selection of different acculturation strategies. As individuals immigrate to unequal segments of society, immigrants to areas lower on economic and ethnic hierarchies may encounter limited social mobility and membership to a disadvantaged community. This can be explained by the theory of Segmented Assimilation, which describes the situation in which immigrant individuals or groups assimilate to the culture of different segments of the society of the host country. Whether immigrants enter the upper, middle, or lower class is largely determined by the socioeconomic status of the previous generation.
In a broad study involving immigrants in 13 immigration-receiving countries, the experience of discrimination was positively related to the maintenance of the immigrants' ethnic culture. In other words, immigrants that maintain their cultural practices and values are more likely to be discriminated against than those who abandon their culture. Further research has also identified that the acculturation strategies and experiences of immigrants can be significantly influenced by the acculturation preferences of the members of the host society. The degree of intergroup and interethnic contact has also been shown to influence acculturation preferences between groups, support for multilingual and multicultural maintenance of minority groups, and openness towards multiculturalism. Enhancing understanding of out-groups, nurturing empathy, fostering community, minimizing social distance and prejudice, and shaping positive intentions and behaviors contribute to improved interethnic and intercultural relations through intergroup contact.
Most individuals show variation in both their ideal and chosen acculturation strategies across different domains of their lives. For example, among immigrants, it is often easier and more desired to acculturate to their host society's attitudes towards politics and government, than it is to acculturate to new attitudes about religion, principles, values, and customs.
Acculturative stress
The large flux of migrants around the world has sparked scholarly interest in acculturation, and how it can specifically affect health by altering levels of stress, access to health resources, and attitudes towards health. The effects of acculturation on physical health are thought to be a major factor in the immigrant paradox, which argues that first generation immigrants tend to have better health outcomes than non-immigrants. Although this term has been popularized, most of the academic literature supports the opposite conclusion, or that immigrants have poorer health outcomes than their host culture counterparts.
One prominent explanation for the negative health behaviors and outcomes (e.g. substance use, low birth weight) associated with the acculturation process is the acculturative stress theory. Acculturative stress refers to the stress response of immigrants in response to their experiences of acculturation. Stressors can include but are not limited to the pressures of learning a new language, maintaining one's native language, balancing differing cultural values, and brokering between native and host differences in acceptable social behaviors. Acculturative stress can manifest in many ways, including but not limited to anxiety, depression, substance abuse, and other forms of mental and physical maladaptation. Stress caused by acculturation has been heavily documented in phenomenological research on the acculturation of a large variety of immigrants. This research has shown that acculturation is a "fatiguing experience requiring a constant stream of bodily energy," and is both an "individual and familial endeavor" involving "enduring loneliness caused by seemingly insurmountable language barriers".
One important distinction when it comes to risk for acculturative stress is degree of willingness, or migration status, which can differ greatly if one enters a country as a voluntary immigrant, refugee, asylum seeker, or sojourner. According to several studies, voluntary migrants experience roughly 50% less acculturative stress than refugees, making this an important distinction. According to Schwartz (2010), there are four main categories of migrants:
Voluntary immigrants: those that leave their country of origin to find employment, economic opportunity, advanced education, marriage, or to reunite with family members that have already immigrated.
Refugees: those who have been involuntarily displaced by persecution, war, or natural disasters.
Asylum seekers: those who willingly leave their native country to flee persecution or violence.
Sojourners: those who relocate to a new country on a time-limited basis and for a specific purpose. It is important to note that this group fully intends to return to their native country.
This type of entry distinction is important, but acculturative stress can also vary significantly within and between ethnic groups. Much of the scholarly work on this topic has focused on Asian and Latino/a immigrants; however, more research is needed on the effects of acculturative stress on other ethnic immigrant groups. Among U.S. Latinos, higher levels of adoption of the American host culture have been associated with negative effects on health behaviors and outcomes, such as increased risk for depression and discrimination, and increased risk for low self-esteem. However, some individuals also report "finding relief and protection in relationships" and "feeling worse and then feeling better about oneself with increased competencies" during the acculturative process. Again, these differences can be attributed to the age of the immigrant, the manner in which an immigrant exited their home country, and how the immigrant is received by both the original and host cultures. Recent research has compared the acculturative processes of documented Mexican-American immigrants and undocumented Mexican-American immigrants and found significant differences in their experiences and levels of acculturative stress. Both groups of Mexican-American immigrants faced similar risks for depression and discrimination from the host (Americans), but the undocumented group of Mexican-American immigrants also faced discrimination, hostility, and exclusion by their own ethnic group (Mexicans) because of their unauthorized legal status. These studies highlight the complexities of acculturative stress, the degree of variability in health outcomes, and the need for specificity over generalizations when discussing potential or actual health outcomes.
Researchers recently uncovered another layer of complications in this field, where survey data has either combined several ethnic groups together or has labeled an ethnic group incorrectly. When these generalizations occur, nuances and subtleties about a person or group's experience of acculturation or acculturative stress can be diluted or lost. For example, much of the scholarly literature on this topic uses U.S. Census data. The Census incorrectly labels Arab-Americans as Caucasian or "White". By doing so, this data set omits many factors about the Muslim Arab-American migrant experience, including but not limited to acculturation and acculturative stress. This is of particular importance after the events of September 11, 2001, since Muslim Arab-Americans have faced increased prejudice and discrimination, leaving this religious ethnic community with an increased risk of acculturative stress. Research focusing on the adolescent Muslim Arab American experience of acculturation has also found that youth who experience acculturative stress during the identity formation process are at a higher risk for low self-esteem, anxiety, and depression.
Some researchers argue that education, social support, hopefulness about employment opportunities, financial resources, family cohesion, maintenance of traditional cultural values, and high socioeconomic status (SES) serve as protections or mediators against acculturative stress. Previous work shows that limited education, low SES, and underemployment all increase acculturative stress. Since this field of research is rapidly growing, more research is needed to better understand how certain subgroups are differentially impacted, how stereotypes and biases have influenced former research questions about acculturative stress, and the ways in which acculturative stress can be effectively mediated.
Other outcomes
Culture
When individuals of a certain culture are exposed to another (host) culture that is more prevalent in the area where they live, some aspects of the host culture will likely be taken and blended with aspects of the individuals' original culture. In situations of continuous contact, cultures have exchanged and blended foods, music, dances, clothing, tools, and technologies. This kind of cultural exchange can be related to selective acculturation, the process of maintaining cultural content, which researchers trace through individuals' language use, religious beliefs, and family norms. Cultural exchange can either occur naturally through extended contact, or more quickly through cultural appropriation or cultural imperialism.
Cultural appropriation is the adoption of some specific elements of one culture by members of a different cultural group. It can include the introduction of forms of dress or personal adornment, music and art, religion, language, or behavior. These elements are typically imported into the existing culture, and may have wildly different meanings or lack the subtleties of their original cultural context. Because of this, cultural appropriation for monetary gain is typically viewed negatively, and has sometimes been called "cultural theft".
Cultural imperialism is the practice of promoting the culture or language of one nation in another, usually occurring in situations in which assimilation is the dominant strategy of acculturation. Cultural imperialism can take the form of an active, formal policy or a general attitude regarding cultural superiority.
Language
In some instances, acculturation results in the adoption of another country's language, which is then modified over time to become a new, distinct language. For example, Hanzi, the writing system of the Chinese language, has been adapted and modified by other nearby cultures, including: Japan (as kanji), Korea (as hanja), and Vietnam (as chữ Hán). Jews, often living as ethnic minorities, developed distinct languages derived from the common languages of the countries in which they lived (for example, Yiddish from High German and Ladino from Old Spanish). Another common effect of acculturation on language is the formation of pidgin languages. Pidgin is a mixed language that has developed to help communication between members of different cultures in contact, usually occurring in situations of trade or colonialism. For example, Pidgin English is a simplified form of English mixed with some of the language of another culture. Some pidgin languages can develop into creole languages, which are spoken as a first language.
Language plays a pivotal role in cultural heritage, serving as both a foundation for group identity and a means for transmitting culture in situations of contact between languages. Language acculturation strategies, attitudes and identities can also influence the sociolinguistic development of languages in bi/multilingual contexts.
Food
Food habits and food consumption are affected by acculturation on different levels. Research has indicated that food habits are discreet and practiced privately, and change occurs slowly. Consumption of new food items is affected by the availability of native ingredients, convenience, and cost; therefore, an immediate change is likely to occur. Aspects of food acculturation include the preparation, presentation, and consumption of food. Different cultures have different ways in which they prepare, serve, and eat their food. When exposed to another culture for an extended period of time, individuals tend to take aspects of the "host" culture's food customs and implement them with their own. In cases such as these, acculturation is heavily influenced by general food knowledge, or knowing the unique kinds of food different cultures traditionally have, the media, and social interaction. It allows for different cultures to be exposed to one another, causing some aspects to intertwine and also become more acceptable to the individuals of each of the respective cultures.
Controversies and debate
Definitions
Anthropologists have made a semantic distinction between group and individual levels of acculturation. In such instances, the term transculturation is used to define individual foreign-origin acculturation, and occurs on a smaller scale with less visible impact. Scholars making this distinction use the term "acculturation" only to address large-scale cultural transactions. Acculturation, then, is the process by which migrants gain new information and insight about the norms and values of their culture and adapt their behaviors to the host culture.
Recommended models
Research has largely indicated that the integrationist model of acculturation leads to the most favorable psychological outcomes and marginalization to the least favorable. While an initial meta-analysis of the acculturation literature found these results to be unclear, a more thorough meta-analysis of 40 studies showed that integration was indeed found to have a "significant, weak, and positive relationship with psychological and sociocultural adjustment". A study by John W. Berry (2006), which included 7,997 immigrant adolescents from 13 countries, found that immigrant boys tend to have slightly better psychological adaptation than immigrant girls. Overall, immigrants in the integration profile were found to be more well-adapted than those in other profiles. Perceived discrimination was also negatively linked to both psychological and sociocultural adaptation. Various factors can explain the differences in these findings, including how different the two interacting cultures are, and the degree of integration difficulty (bicultural identity integration). These types of factors partially explain why general statements about approaches to acculturation are not sufficient in predicting successful adaptation.  As research in this area has expanded, one study has identified marginalization as being a maladaptive acculturation strategy.
Typological approach
Several theorists have stated that the fourfold models of acculturation are too simplistic to have predictive validity. Some common criticisms of such models include the fact that individuals do not often fall neatly into any of the four categories, and that there is very little evidence for the applied existence of the marginalization acculturation strategy. In addition, the bi-directionality of acculturation means that whenever two groups are engaged in cultural exchange, there are 16 permutations of acculturation strategies possible (e.g. an integrationist individual within an assimilationist host culture). Another criticism of the fourfold model is that people rarely hold a self-perception that fits one category exclusively, neither wholly assimilating into the other culture nor solely continuing their heritage culture. The interactive acculturation model represents one proposed alternative to the typological approach by attempting to explain the acculturation process within a framework of state policies and the dynamic interplay of host community and immigrant acculturation orientations.
See also
Naturalization
Acclimatization
Socialization
Deculturalization
Globalization
Nationalization
Acculturation gap
Educational anthropology
Ethnocentrism
Cultural relativism
Cultural conflict
Inculturation
Cultural competence
Language shift
Westernization
Cultural identity
Linguistic imperialism
Intercultural communication
Fusion music
Fusion cuisine
Notes
Cultural studies
Majority–minority relations
Immigration
Traditional education
Traditional education, also known as back-to-basics, conventional education or customary education, refers to long-established customs that society has traditionally used in schools. Some forms of education reform promote the adoption of progressive education practices and a more holistic approach which focuses on individual students' needs: academics, mental health, and social-emotional learning. In the eyes of reformers, traditional teacher-centered methods focused on rote learning and memorization must be abandoned in favor of student-centered and task-based approaches to learning.
Depending on the context, the opposite of traditional education may be progressive education, modern education (the education approaches based on developmental psychology), or alternative education.
Purposes
The primary purpose of traditional education is to continue passing on those skills, facts, and standards of moral and social conduct that adults consider to be necessary for the next generation's material advancement. As beneficiaries of this plan, which educational progressivist John Dewey described as being "imposed from above and from outside", the students are expected to docilely and obediently receive and believe these fixed answers. Teachers are the instruments by which this knowledge is communicated and these standards of behavior are enforced.
Historically, the primary educational technique of traditional education was simple oral recitation: In a typical approach, students spent some of their time sitting quietly at their places and listening to one student after another recite his or her lesson, until each had been called upon. The teacher's primary activity during such sessions was assigning and listening to these recitations; students studied and memorized the assignments at home. A test or oral examination might be given at the end of a unit, and the process, which was called "assignment–study–recitation–test", was repeated. There was also a reliance on rote memorization (memorization with no effort at understanding the meaning). It is believed that the use of recitation, rote memorization, and unrelated assignments is an extremely inefficient use of students' and teachers' time. This traditional approach also insisted that all students be taught the same materials at the same point; students that did not learn quickly enough failed, rather than being allowed to succeed at their natural speeds. This approach, which had been imported from Europe, dominated American education until the end of the 19th century, when the education reform movement imported progressive education techniques from Europe.
Traditional education is associated with much stronger elements of coercion than seems acceptable now in most cultures. It has sometimes included: the use of corporal punishment to maintain classroom discipline or punish errors; inculcating the dominant religion and language; separating students according to gender, race, and social class, as well as teaching different subjects to girls and boys. In terms of curriculum there was and still is a high level of attention paid to time honored academic knowledge.
Current status
In the present, traditional education varies enormously from culture to culture, but it still tends to be characterized by a much higher level of coercion than alternative education. Traditional schooling in Britain and its possessions and former colonies tends to follow the English Public School style of strictly enforced uniforms and a militaristic style of discipline. This can be contrasted with South African, US and Australian schools, which can have a much higher tolerance for spontaneous student-to-teacher communication.
See also
Classical education movement, which emphasizes Western Civilization
List of abandoned education methods
Curriculum
Notes
Education reform
Curricula
Philosophy of education
Lingua franca
A lingua franca (for plurals, see Usage notes below), also known as a bridge language, common language, trade language, auxiliary language, or link language, is a language systematically used to make communication possible between groups of people who do not share a native language or dialect, particularly when it is a third language that is distinct from both of the speakers' native languages.
Linguae francae have developed around the world throughout human history, sometimes for commercial reasons (so-called "trade languages" facilitated trade), but also for cultural, religious, diplomatic and administrative convenience, and as a means of exchanging information between scientists and other scholars of different nationalities. The term is taken from the medieval Mediterranean Lingua Franca, a Romance-based pidgin language used especially by traders in the Mediterranean Basin from the 11th to the 19th centuries. A world language—a language spoken internationally and by many people—is a language that may function as a global lingua franca.
Characteristics
Any language regularly used for communication between people who do not share a native language is a lingua franca. Lingua franca is a functional term, independent of any linguistic history or language structure.
Pidgins are therefore lingua francas; creoles and arguably mixed languages may similarly be used for communication between language groups. But lingua franca is equally applicable to a non-creole language native to one nation (often a colonial power) learned as a second language and used for communication between diverse language communities in a colony or former colony.
Lingua francas are often pre-existing languages with native speakers, but they can also be pidgins or creoles developed for that specific region or context. Pidgins are rapidly developed and simplified combinations of two or more established languages, while creoles are generally viewed as pidgins that have evolved into fully complex languages in the course of adaptation by subsequent generations. Pre-existing lingua francas such as French are used to facilitate intercommunication in large-scale trade or political matters, while pidgins and creoles often arise out of colonial situations and a specific need for communication between colonists and indigenous peoples. Pre-existing lingua francas are generally widespread, highly developed languages with many native speakers. Conversely, pidgins are very simplified means of communication, containing loose structuring, few grammatical rules, and possessing few or no native speakers. Creole languages are more developed than their ancestral pidgins, utilizing more complex structure, grammar, and vocabulary, as well as having substantial communities of native speakers.
Whereas a vernacular language is the native language of a specific geographical community, a lingua franca is used beyond the boundaries of its original community, for trade, religious, political, or academic reasons. For example, English is a vernacular in the United Kingdom but it is used as a lingua franca in the Philippines, alongside Filipino. Likewise, Arabic, French, Standard Chinese, Russian and Spanish serve similar purposes as industrial and educational lingua francas across regional and national boundaries.
Even though they are used as bridge languages, international auxiliary languages such as Esperanto have not had a great degree of adoption, so they are not described as lingua francas.
Etymology
The term lingua franca derives from Mediterranean Lingua Franca (also known as Sabir), the pidgin language that people around the Levant and the eastern Mediterranean Sea used as the main language of commerce and diplomacy from the late Middle Ages to the 18th century, most notably during the Renaissance era. During that period, a simplified version of mainly Italian in the eastern Mediterranean and Spanish in the western Mediterranean that incorporated many loanwords from Greek, Slavic languages, Arabic, and Turkish came to be widely used as the "lingua franca" of the region, although some scholars claim that the Mediterranean Lingua Franca was just poorly used Italian.
In Lingua Franca (the specific language), lingua is from the Italian for 'a language'. Franca is related to the Greek and Arabic words for the Franks, as well as the equivalent Italian term; in all three cases, the literal sense is 'Frankish', leading to the direct translation: 'language of the Franks'. During the late Byzantine Empire, Franks was a term that applied to all Western Europeans.
Through changes of the term in literature, lingua franca has come to be interpreted as a general term for pidgins, creoles, and some or all forms of vehicular languages. This transition in meaning has been attributed to the idea that pidgin languages only became widely known from the 16th century on due to European colonization of continents such as the Americas, Africa, and Asia. During this time, the need for a term to address these pidgin languages arose, hence the shift in the meaning of Lingua Franca from a single proper noun to a common noun encompassing a large class of pidgin languages.
As recently as the late 20th century, some restricted the use of the generic term to mean only mixed languages that are used as vehicular languages, its original meaning.
Douglas Harper's Online Etymology Dictionary states that the term Lingua Franca (as the name of the particular language) was first recorded in English during the 1670s, although an even earlier example of the use of it in English is attested from 1632, where it is also referred to as "Bastard Spanish".
Usage notes
The term is well established in its naturalization to English and so major dictionaries do not italicize it as a "foreign" term.
Its plurals in English are lingua francas and linguae francae, with the former being first-listed or only-listed in major dictionaries.
Examples
Historical lingua francas
The use of lingua francas has existed since antiquity.
Akkadian remained the common language of a large part of Western Asia, a legacy of several earlier empires, until it was supplanted in this role by Aramaic.
Sanskrit historically served as a lingua franca throughout the majority of South Asia. The Sanskrit language's historic presence is attested across a wide geography beyond South Asia. Inscriptions and literary evidence suggest that Sanskrit was already being adopted in Southeast Asia and Central Asia in the 1st millennium CE, through monks, religious pilgrims and merchants.
Until the early 20th century, Literary Chinese served as both the written lingua franca and the diplomatic language in East Asia, including China, Korea, Japan, Ryūkyū, and Vietnam. In the early 20th century, vernacular written Chinese replaced Classical Chinese within China as both the written and spoken lingua franca for speakers of different Chinese dialects, and because of the declining power and cultural influence of China in East Asia, English has since replaced Classical Chinese as the lingua franca in East Asia.
Koine Greek was the lingua franca of the Hellenistic culture. Koine Greek, also known as the Alexandrian dialect, common Attic, Hellenistic, or Biblical Greek, was the common supra-regional form of Greek spoken and written during the Hellenistic period, the Roman Empire and the early Byzantine Empire. It evolved from the spread of Greek following the conquests of Alexander the Great in the fourth century BC, and served as the lingua franca of much of the Mediterranean region and the Middle East during the following centuries.
Old Tamil was once the lingua franca for most of ancient Tamilakam and Sri Lanka. John Guy states that Tamil was also the lingua franca for early maritime traders from India. The language and its dialects were used widely in the state of Kerala as the major language of administration, literature and common usage until the 12th century AD. Tamil was also used widely in inscriptions found in the southern Andhra Pradesh districts of Chittoor and Nellore until the 12th century AD. Tamil was used for inscriptions from the 10th through 14th centuries in southern Karnataka districts such as Kolar, Mysore, Mandya and Bangalore.
Latin, through the power of the Roman Republic, became the dominant language in Italy and subsequently throughout the realms of the Roman Empire. Even after the Fall of the Western Roman Empire, Latin was the common language of communication, science, and academia in Europe until well into the 18th century, when other regional vernaculars (including its own descendants, the Romance languages) supplanted it in common academic and political usage, and it eventually became a dead language in the modern linguistic definition.
Classical Māori is the retrospective name for the language (formed out of many dialects, albeit all mutually intelligible) of both the North Island and the South Island for the 800 years before the European settlement of New Zealand. Māori shared a common language that was used for trade, inter-iwi dialogue on marae, and education through wānanga. After the signing of the Treaty of Waitangi, Māori language was the lingua franca of the Colony of New Zealand until English superseded it in the 1870s. The description of Māori language as New Zealand's 19th-century lingua franca has been widely accepted. The language was initially vital for all European and Chinese migrants in New Zealand to learn, as Māori formed a majority of the population, owned nearly all the country's land and dominated the economy until the 1860s. Discriminatory laws such as the Native Schools Act 1867 contributed to the demise of Māori language as a lingua franca.
Sogdian was used to facilitate trade between those who spoke different languages along the Silk Road, which is why native speakers of Sogdian were employed as translators in Tang China. The Sogdians also ended up circulating spiritual beliefs and texts, including those of Buddhism and Christianity, thanks to their ability to communicate to many people in the region through their native language.
Old Church Slavonic, an Eastern South Slavic language, is the first Slavic literary language. Between the 9th and 11th centuries, it was the lingua franca of a great part of the predominantly Slavic states and populations in Southeast and Eastern Europe, in liturgy and church organization, culture, literature, education and diplomacy, and served as an official and national language in the case of Bulgaria. It was the first national and also international Slavic literary language. The Glagolitic alphabet was originally used at both the Preslav and Ohrid literary schools, though the Cyrillic script was developed early on at the Preslav Literary School, where it superseded Glagolitic as the official script in Bulgaria in 893. Old Church Slavonic spread to other South-Eastern, Central, and Eastern European Slavic territories, most notably Croatia, Serbia, Bohemia, Lesser Poland, and principalities of the Kievan Rus', while retaining characteristically South Slavic linguistic features. It spread also to territories that were not completely Slavic between the Carpathian Mountains, the Danube and the Black Sea, corresponding to Wallachia and Moldavia. Nowadays, the Cyrillic writing system is used for various languages across Eurasia, and as the national script in various Slavic, Turkic, Mongolic, Uralic, Caucasian and Iranic-speaking countries in Southeastern Europe, Eastern Europe, the Caucasus, Central, North, and East Asia.
The Mediterranean Lingua Franca was largely based on Italian and Provençal. This language was spoken from the 11th to 19th centuries around the Mediterranean basin, particularly in the European commercial empires of Italian cities (Genoa, Venice, Florence, Milan, Pisa, Siena) and in trading ports located throughout the eastern Mediterranean rim.
During the Renaissance, standard Italian was spoken as a language of culture in the main royal courts of Europe, and among intellectuals. This lasted from the 14th century to the end of the 16th, when French replaced Italian as the usual lingua franca in northern Europe. Italian musical terms, in particular dynamic and tempo notations, have continued in use to the present day.
Classical Quechua is either of two historical forms of Quechua, the exact relationship and degree of closeness between which is controversial, and which have sometimes been identified with each other. These are:
the variety of Quechua that was used as a lingua franca and administrative language in the Inca Empire (1438–1533) (or Inca lingua franca). Since the Incas did not have writing, the evidence about the characteristics of this variety is scant, and it has been a subject of significant disagreement.
the variety of Quechua that was used in writing for religious and administrative purposes in the Andean territories of the Spanish Empire, mostly in the late 16th century and the first half of the 17th century and has sometimes been referred to, both historically and in academia, as lengua general ('common language') (or Standard Colonial Quechua).
Ajem-Turkic functioned as lingua franca in the Caucasus region and in southeastern Dagestan, and was widely spoken at the court and in the army of Safavid Iran.
Modern
English
English is sometimes described as the foremost global lingua franca, being used as a working language by individuals of diverse linguistic and cultural backgrounds in a variety of fields and international organizations to communicate with one another. English is the most spoken language in the world, primarily due to the historical global influence of the British Empire and the United States. It is a co-official language of the United Nations and many other international and regional organizations and has also become the de facto language of diplomacy, science, international trade, tourism, aviation, entertainment and the internet.
When the United Kingdom became a colonial power, English served as the lingua franca of the colonies of the British Empire. In the post-colonial period, most of the newly independent nations which had many indigenous languages opted to continue using English as one of their official languages such as Ghana and South Africa. In other former colonies with several official languages such as Singapore and Fiji, English is the primary medium of education and serves as the lingua franca among citizens.
Even in countries not associated with the English-speaking world, English has emerged as a lingua franca in certain situations where its use is perceived to be more efficient to communicate, especially among groups consisting of native speakers of many languages. In Qatar, the medical community is primarily made up of workers from countries without English as a native language. In medical practices and hospitals, nurses typically communicate with other professionals in English as a lingua franca. This occurrence has led to interest in researching the consequences of the medical community communicating in a lingua franca. English is also sometimes used in Switzerland between people who do not share one of Switzerland's four official languages, or with foreigners who are not fluent in the local language.
In the European Union, the use of English as a lingua franca has led researchers to investigate whether a Euro English dialect has emerged. In the fields of technology and science, English emerged as a lingua franca in the 20th century.
Spanish
The Spanish language spread mainly throughout the New World, becoming a lingua franca in the territories and colonies of the Spanish Empire, which also included parts of Africa, Asia, and Oceania. After the breakup of much of the empire in the Americas, its function as a lingua franca was solidified by the governments of the newly independent nations of what is now Hispanic America. While its usage in Spain's Asia-Pacific colonies has largely died out except in the Philippines, where it is still spoken by a small minority, Spanish became the lingua franca of what is now Equatorial Guinea, being the main language of government and education and is spoken by the vast majority of the population.
Due to large numbers of immigrants from Latin America in the second half of the 20th century and resulting influence, Spanish has also emerged somewhat as a lingua franca in parts of the Southwestern United States and southern Florida, especially in communities where native Spanish speakers form the majority of the population.
At present it is the second most used language in international trade, and the third most used in politics, diplomacy and culture after English and French.
It is also one of the most taught foreign languages throughout the world and is also one of the six official languages of the United Nations.
French
French is sometimes regarded as the first global lingua franca, having supplanted Latin as the prestige language of politics, trade, education, diplomacy, and military affairs in early modern Europe and later spreading around the world with the establishment of the French colonial empire. With France emerging as the leading political, economic, and cultural power of Europe in the 16th century, the language was adopted by royal courts throughout the continent, including the United Kingdom, Sweden, and Russia, and as the language of communication between European academics, merchants, and diplomats. With the expansion of Western colonial empires, French became the main language of diplomacy and international relations up until World War II, when it was replaced by English due to the rise of the United States as the leading global superpower. Stanley Meisler of the Los Angeles Times said that the fact that the Treaty of Versailles was written in English as well as French was the "first diplomatic blow" against the language. Nevertheless, it remains the second most used language in international affairs and is one of the six official languages of the United Nations.
As a legacy of French and Belgian colonial rule, most former colonies of these countries maintain French as an official language or lingua franca due to the many indigenous languages spoken in their territory. Notably, in most Francophone West and Central African countries, French has transitioned from being only a lingua franca to the native language among some communities, mostly in urban areas or among the elite class. In other regions such as the French-speaking countries of the Maghreb (Algeria, Tunisia, Morocco, and Mauritania) and parts of the French Caribbean, French is the lingua franca in professional sectors and education, even though it is not the native language of the majority.
French continues to be used as a lingua franca in certain cultural fields such as cuisine, fashion, and sport.
As a consequence of Brexit, French has been increasingly used as a lingua franca in the European Union and its institutions, either alongside or, at times, in place of English.
German
German is used as a lingua franca in Switzerland to some extent; however, English is generally preferred to avoid favoring German over the three other official languages. Middle Low German was the lingua franca of the North Sea and Baltic Sea regions from the late Hohenstaufen period until the mid-15th century, when extensive trading was done by the Hanseatic League along the Baltic and North Seas. German remains a widely studied language in Central Europe and the Balkans, especially in former Yugoslavia. It is recognized as an official language in countries outside of Europe, specifically Namibia. German is also one of the working languages of the EU alongside English and French, but it is used less in that role than the other two.
Chinese
Today, Standard Mandarin Chinese is the lingua franca of China and Taiwan, which are home to many mutually unintelligible varieties of Chinese and, in the case of Taiwan, indigenous Formosan languages. Among many Chinese diaspora communities, Cantonese is often used as the lingua franca instead, particularly in Southeast Asia, due to a longer history of immigration and trade networks with southern China, although Mandarin has also been adopted in some circles since the 2000s.
Arabic
Arabic was used as a lingua franca across the Islamic empires, whose sizes necessitated a common language, and spread across the Arab and Muslim worlds. In Djibouti and parts of Eritrea, both of which are countries where multiple official languages are spoken, Arabic has emerged as a lingua franca in part thanks to the population of the region being predominantly Muslim and Arabic playing a crucial role in Islam. In addition, Eritrean emigrants who fled ongoing warfare to nearby Arab countries have contributed to Arabic becoming a lingua franca in the region by returning to their homelands having picked up the Arabic language.
Russian
Russian is in use and widely understood in Central Asia and the Caucasus, areas formerly part of the Russian Empire and Soviet Union. Its use remains prevalent in many post-Soviet states. Russian has some presence as a minority language in the Baltic states and some other states in Eastern Europe, as well as in pre-opening China. It remains the official language of the Commonwealth of Independent States. Russian is also one of the six official languages of the United Nations. Since the collapse of the Soviet Union, its use has declined in post-Soviet states. Parts of the Russian-speaking minorities outside Russia have emigrated to Russia or assimilated into their countries of residence by learning the local language and preferring it in daily communication.
In Central European countries that were members of the Warsaw Pact, where Russian was only a political language used in international communication and where there was no Russian minority, the Russian language practically does not exist, and in schools it was replaced by English as the primary foreign language.
Portuguese
Portuguese served as a lingua franca in the Portuguese Empire, Africa, South America and Asia in the 15th and 16th centuries. When the Portuguese started exploring the seas of Africa, America, Asia and Oceania, they tried to communicate with the natives by mixing a Portuguese-influenced version of lingua franca with the local languages. When Dutch, English or French ships came to compete with the Portuguese, the crews tried to learn this "broken Portuguese". Through a process of change, the lingua franca's Portuguese lexicon was replaced by words from the languages of the peoples in contact. Portuguese remains an important lingua franca in the Portuguese-speaking African countries, East Timor, and to a certain extent in Macau, where it is recognized as an official language alongside Chinese, though in practice not commonly spoken. Portuguese and Spanish have a certain degree of mutual intelligibility, and mixed languages such as Portuñol are used to facilitate communication in areas like the border area between Brazil and Uruguay.
Hindustani
The Hindustani language, with Hindi and Urdu as dual standard varieties, serves as the lingua franca of Pakistan and Northern India. Many Hindi-speaking North Indian states have adopted the three-language formula in which students are taught: "(a) Hindi (with Sanskrit as part of the composite course); (b) Any other modern Indian language including Urdu and (c) English or any other modern European language." The order in non-Hindi speaking states is: "(a) the major language of the state or region; (b) Hindi; (c) Any other modern Indian language including Urdu but excluding (a) and (b) above; and (d) English or any other modern European language." Hindi has also emerged as a lingua franca in Arunachal Pradesh, a linguistically diverse state in Northeast India. It is estimated that nine-tenths of the state's population knows Hindi.
Urdu is the lingua franca of Pakistan and has gained significant influence among its people, administration and education system. While it shares official status with English, Urdu is the preferred and dominant language used for inter-communication between Pakistan's different ethnic groups.
Malay
Malay is understood across a cultural region of Southeast Asia called the "Malay world", which includes Brunei, Indonesia, Malaysia, Singapore, southern Thailand, and certain parts of the Philippines. It is pluricentric, with several nations codifying a local vernacular variety into distinct national literary standards: Indonesia notably adopted a variant spoken in Riau as the basis for "Indonesian", despite Javanese having more native speakers; this standard is the sole official language spoken throughout the vast country, despite being the first language of only a minority of Indonesians.
Swahili
Swahili developed as a lingua franca between several Bantu-speaking tribal groups on the east coast of Africa, with heavy influence from Arabic. The earliest examples of writing in Swahili date from 1711. In the early 19th century, the use of Swahili as a lingua franca moved inland with the Arab ivory and slave traders. It was eventually adopted by Europeans as well during periods of colonization in the area. German colonizers used it as the language of administration in German East Africa (later Tanganyika), which influenced the choice to adopt it as a national language in what is now independent Tanzania. Swahili is currently one of the national languages and is taught in schools and universities in several East African countries, prompting many people in the region to regard it as a modern-day lingua franca. Several Pan-African writers and politicians have unsuccessfully called for Swahili to become the lingua franca of Africa as a means of unifying the continent and overcoming the legacy of colonialism.
Persian
Persian, an Iranian language, is the official language of Iran, Afghanistan (Dari) and Tajikistan (Tajik). It acts as a lingua franca in both Iran and Afghanistan between the various ethnic groups in those countries. Before the British colonized the Indian subcontinent, Persian was South Asia's lingua franca and a widely used official language in what are now northern India and Pakistan.
Hausa
Hausa can also be seen as a lingua franca because it is the language of communication between speakers of different languages in Northern Nigeria and other West African countries, including the northern region of Ghana.
Amharic
Amharic is the lingua franca and most widely spoken language in Ethiopia, and is known by most people who speak another Ethiopian language.
Creole languages
Creoles, such as Nigerian Pidgin in Nigeria, are used as lingua francas across the world. This is especially true in Africa, the Caribbean, Melanesia, Southeast Asia and in parts of Australia by Indigenous Australians.
Sign languages
The majority of pre-colonial North American nations communicated internationally using Hand Talk. Also called Prairie Sign Language, Plains Indian Sign Language, or First Nations Sign Language, this language functioned predominantly, and still continues to function, as a second language within most of the (now historical) countries of the Great Plains, from Newe Segobia in the west to Anishinaabewaki in the east, down into what are now the northern states of Mexico and up into Cree Country, stopping before Denendeh. The relationship between Hand Talk and other manual Indigenous languages, such as Keresan Sign Language and the now-extinct Plateau Sign Language (though Ktunaxa Sign Language is still used), remains unknown. Although unrelated, Inuit Sign Language may have played, and may continue to play, a similar role across Inuit Nunangat and the various Inuit dialects. The original Hand Talk survives in pockets across Indian Country, and it has also been employed to create new languages or revive old ones, as with Oneida Sign Language.
International Sign, though a pidgin language, is present at most significant international gatherings, where interpretation into national sign languages such as LSF, ASL, BSL, Libras, or Auslan is provided. Interpreters of International Sign (IS, formerly Gestuno) can be found at many European Union parliamentary or committee sittings, during certain United Nations affairs, at international sporting events like the Deaflympics, in all World Federation of the Deaf functions, and across similar settings. The language has few set internal grammatical rules, instead co-opting the national vocabularies of the speaker and audience and modifying the words to bridge linguistic gaps, with heavy use of gesture and classifiers.
See also
Rosetta Stone
Global language system
Language contact
List of languages by number of native speakers
List of languages by total number of speakers
List of languages by the number of countries in which they are recognized as an official language
Interlinguistics
Universal language
Working language
External links
Lingua franca sample texts from Juan del Encina, Le Bourgeois Gentilhomme, Carlo Goldoni's L'Impresario da Smyrna, Diego de Haedo and other sources
SMART criteria | S.M.A.R.T. (or SMART) is an acronym used as a mnemonic device to establish criteria for effective goal-setting and objective development. This framework is commonly applied in various fields, including project management, employee performance management, and personal development. The term was first proposed by George T. Doran in the November 1981 issue of 'Management Review', where he advocated for setting objectives that are Specific, Measurable, Assignable, Realistic, and Time-bound—hence the acronym S.M.A.R.T.
Since its inception, the SMART framework has evolved, leading to the emergence of different variations of the acronym. Commonly used versions incorporate alternative words, including 'attainable,' 'relevant,' and 'timely.' Additionally, several authors have introduced supplementary letters to the acronym. For instance, some refer to SMARTS goals, which include the element of 'self-defined,' while others utilize SMARTER goals.
Proponents of SMART objectives argue that these criteria facilitate a clear framework for goal setting and evaluation, applicable across various contexts such as business (between employee and employer) and sports (between athlete and coach). This framework enables the individual setting the goal to have a precise understanding of the expected outcomes, while the evaluator has concrete criteria for assessment. The SMART acronym is linked to Peter Drucker's Management by Objectives (MBO) concept, illustrating its foundational role in strategic planning and performance management.
History
In the November 1981 issue of Management Review (AMA Forum), George T. Doran's paper titled "There's a S.M.A.R.T. way to write management's goals and objectives" introduces a framework for setting management objectives, emphasizing the importance of clear goals. The S.M.A.R.T. criteria he proposes are as follows:
Specific: Targeting a particular area for improvement.
Measurable: Quantifying, or at least suggesting, an indicator of progress.
Assignable: Defining responsibility clearly.
Realistic: Outlining attainable results with available resources.
Time-related: Including a timeline for expected results.
Doran clarifies that it's not always feasible to quantify objectives at all management levels, particularly for middle-management roles. He argues for the value in balancing quantifiable objectives with more abstract goals to formulate a comprehensive action plan. This emphasizes the integration of objectives with their execution plans as the foundation of effective management.
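To make the checklist concrete, an objective can be represented as a small data structure and audited against the five criteria. The following Python sketch is purely illustrative: the field names and checks are assumptions layered on Doran's list, not part of his paper.

# Sketch: representing a S.M.A.R.T. objective and flagging missing criteria.
# Field names and checks are illustrative, not taken from Doran (1981).
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Objective:
    description: str          # Specific: targets one area for improvement
    metric: Optional[str]     # Measurable: an indicator of progress
    owner: Optional[str]      # Assignable: who is responsible
    attainable: bool          # Realistic: achievable with available resources
    deadline: Optional[date]  # Time-related: when results are expected

def smart_gaps(obj: Objective) -> list:
    """Return the names of any S.M.A.R.T. criteria the objective misses."""
    gaps = []
    if not obj.description.strip():
        gaps.append("Specific")
    if obj.metric is None:
        gaps.append("Measurable")
    if obj.owner is None:
        gaps.append("Assignable")
    if not obj.attainable:
        gaps.append("Realistic")
    if obj.deadline is None:
        gaps.append("Time-related")
    return gaps

goal = Objective("Cut build time", metric="minutes per build",
                 owner="release team", attainable=True, deadline=None)
print(smart_gaps(goal))  # ['Time-related']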
Common usage
S.M.A.R.T. goals and objectives are key concepts in planning and project management. The acronym, while consistently used, applies differently to goals and objectives. Goals define the broad outcomes intended from a project or assignment, and objectives specify the actionable steps aimed at achieving these outcomes. There is acknowledgment of some variation in the application of the framework, reflecting a range of interpretations in practice.
Effectiveness
Research suggests that the effectiveness of the SMART goal-setting framework can vary depending on the context in which it is applied, indicating that its universal application might not always yield effective outcomes.
Career goals
A Michigan State University Extension study highlighted the effectiveness of the SMART goal-setting approach. It showed that individuals who wrote down their goals and outlined action steps had a 76% success rate in achieving them, especially when they shared weekly updates with a friend. This was compared to a 43% success rate for those who didn't document their goals, indicating an advantage to the structured approach of SMART goal-setting.
Physical activity
A review of literature indicates mixed effectiveness of the SMART acronym for increasing physical activity. Criticisms focus on its lack of scientific basis and empirical support, suggesting non-specific, open-ended goals might be more beneficial for some individuals. Research indicates that vague or challenging goals could be more effective than specific ones for increasing physical activity. Swann et al. highlight the original SMART framework's absence of theoretical or empirical foundation, contrasting with broader goal-setting research.
Variations
The SMART framework has been expanded by some authors to include additional criteria, enhancing its versatility and application. Examples of these extensions are:
SMARTER: with an added "Evaluated and reviewed", "Evaluate consistently and recognize mastery", or "Exciting and Recorded"
SMARTIE: with an added "Equity and inclusion"
SMARTTA: with an added "Trackable and agreed"
SMARTA: with an added "Agreed"
SMARRT: with "Realistic and relevance" – 'Realistic' refers to something that can be done given the available resources; 'Relevance' ensures the goal is in line with the bigger picture and vision
I-SMART: with an added social goal or objective that demonstrates "Impact"
Alternative acronyms
Other mnemonic acronyms (or contractions) also give criteria to guide in the setting of objectives.
CLEAR: Collaborative; Limited; Emotional; Appreciable; Refinable
PURE: Positively stated; Understood; Relevant; Ethical
CPQQRT: Context; Purpose; Quantity; Quality; Resources; Timing
ABC: Achievable; Believable; Committed
FAST: Frequently discussed; Ambitious; Specific; Transparent
See also
Management by objectives
PDCA
Performance indicator
Strategic planning
Typology | Typology is the study of various traits and types, or the systematic classification of the types of something according to their common characteristics. Typology is the act of finding, counting and classifying facts with the help of eyes, other senses and logic. Typology may refer to:
Typology (anthropology), human anatomical categorization based on morphological traits
Typology (archaeology), classification of artefacts according to their characteristics
Typology (linguistics), study and classification of languages according to their structural features
Morphological typology, a method of classifying languages
Typology (psychology), a model of personality types
Psychological typologies, classifications used by psychologists to describe the distinctions between people
Typology (statistics), a concept in statistics, research design and social sciences
Typology (theology), the Christian interpretation of some figures and events in the Old Testament as foreshadowing the New Testament
Typology (urban planning and architecture), the classification of characteristics common to buildings or urban spaces
Building typology, relating to buildings and architecture
Farm typology, farm classification by the USDA
Sociopolitical typology, four types, or levels, of a political organization
See also
The Bechers' photographic typologies
Blanchard's transsexualism typology, a controversial classification of trans women
Johnson's Typology, a classification of intimate partner violence (IPV)
Topology (disambiguation)
Type (disambiguation)
Typification, a process of creating standard (typical) social construction based on standard assumptions
Typology of Greek vase shapes, classification of Greek vases
Typography, the art and technique of arranging type to make written language legible, readable and appealing when displayed | 0.768525 | 0.989189 | 0.760216 |
Big Five personality traits | In trait theory, the Big Five personality traits (sometimes known as the five-factor model of personality or OCEAN or CANOE models) are a group of five characteristics used to study personality:
openness to experience (inventive/curious vs. consistent/cautious)
conscientiousness (efficient/organized vs. extravagant/careless)
extraversion (outgoing/energetic vs. solitary/reserved)
agreeableness (friendly/compassionate vs. critical/judgmental)
neuroticism (sensitive/nervous vs. resilient/confident)
The Big Five traits did not arise from studying an existing theory of personality; rather, they were an empirical finding in early lexical studies that English personality-descriptive adjectives clustered together under factor analysis into five unique factors. The factor analysis indicates that these five factors can be measured, but further studies have suggested revisions and critiques of the model. Cross-language studies have found a sixth Honesty-Humility factor, suggesting a replacement by the HEXACO model of personality structure. A study of short-form constructs found that the agreeableness and openness constructs were ill-defined in a larger population, suggesting that these traits should be dropped and replaced by more specific dimensions. In addition, critics argue that labels such as "neuroticism" are ill-fitting and that the traits are more properly thought of as unnamed dimensions: "Factor A", "Factor B", and so on.
Despite these issues with its formulation, the five-factor approach has been enthusiastically and internationally embraced, becoming central to much of contemporary personality research. Many subsequent factor analyses, variously formulated and expressed in a variety of languages, have repeatedly reported the finding of five largely similar factors. The five-factor approach has been portrayed as a fruitful scientific achievement, a fundamental advance in the understanding of human personality. Some have claimed that the five factors of personality are "an empirical fact, like the fact that there are seven continents on earth and eight American Presidents from Virginia". Others, such as Jack Block, have expressed concerns over the uncritical acceptance of the approach.
History
The Big Five model was built on understanding the relationship between personality and academic behaviour. It was defined by several independent sets of researchers who analysed words describing people's behaviour. These researchers first studied relationships between many words related to personality traits, reduced the lists of these words by a factor of 5 to 10, and then used factor analysis to group the remaining traits (with data mostly based upon people's estimations, in self-report questionnaires and peer ratings) in order to find the basic factors of personality.
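As a concrete illustration of this procedure, the sketch below runs a five-factor extraction on a synthetic ratings matrix. It is a minimal sketch only: the data are random, so the recovered factors are meaningless, and scikit-learn's FactorAnalysis with varimax rotation stands in for the varied factor-analytic methods the original researchers used.

import numpy as np
from sklearn.decomposition import FactorAnalysis

# Synthetic stand-in data: 500 people rating themselves on 40
# personality adjectives using a 1-5 scale (random, illustrative only).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(500, 40)).astype(float)

# Standardize each adjective, then extract five rotated factors,
# mirroring the five-factor solutions reported in lexical studies.
ratings -= ratings.mean(axis=0)
ratings /= ratings.std(axis=0)

fa = FactorAnalysis(n_components=5, rotation="varimax")
fa.fit(ratings)

# Loadings show how strongly each adjective marks each factor;
# researchers name a factor from its highest-loading adjectives.
loadings = fa.components_.T  # shape: (40 adjectives, 5 factors)
for k in range(5):
    top = np.argsort(-np.abs(loadings[:, k]))[:3]
    print(f"Factor {k + 1}: top adjectives (by index) {top.tolist()}")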
The initial model was advanced in 1958 by Ernest Tupes and Raymond Christal, research psychologists at Lackland Air Force Base in Texas, but it failed to reach scholars and scientists until the 1980s. In 1990, J.M. Digman advanced his five-factor model of personality, which Lewis Goldberg extended to the highest level of organisation. These five overarching domains have been found to contain most known personality traits and are assumed to represent the basic structure behind them all.
At least four sets of researchers have worked independently for decades to identify personality traits reflected in language, and they have mainly identified the same five factors: Tupes and Christal were first, followed by Goldberg at the Oregon Research Institute, Cattell at the University of Illinois, and finally Costa and McCrae. These four sets of researchers used somewhat different methods in finding the five traits, so the resulting sets of factors carry somewhat different names and meanings. However, all have been found to be strongly correlated with their corresponding factors. Studies indicate that the Big Five traits are not nearly as powerful in predicting and explaining actual behaviour as the more numerous facets or primary traits.
Each of the Big Five personality traits contains two separate, but correlated, aspects reflecting a level of personality below the broad domains but above the many facet scales also making up part of the Big Five. The aspects are labelled as follows: Volatility and Withdrawal for Neuroticism; Enthusiasm and Assertiveness for Extraversion; Intellect and Openness for Openness to Experience; Industriousness and Orderliness for Conscientiousness; and Compassion and Politeness for Agreeableness.
Finding the five factors
In 1884, British scientist Sir Francis Galton became the first person known to consider deriving a comprehensive taxonomy of human personality traits by sampling language. The idea that this may be possible is known as the lexical hypothesis. In 1936, American psychologists Gordon Allport of Harvard University and Henry Odbert of Dartmouth College implemented Galton's hypothesis. They organised for three anonymous people to categorise adjectives from Webster's New International Dictionary and a list of common slang words. The result was a list of 4504 adjectives they believed were descriptive of observable and relatively permanent traits.
In 1943, Raymond Cattell of Harvard University took Allport and Odbert's list and reduced it to roughly 160 terms by eliminating words with very similar meanings. To these, he added terms from 22 other psychological categories, plus additional "interest" and "ability" terms, resulting in a list of 171 traits. From this he used factor analysis to derive 60 "personality clusters or syndromes" and an additional 7 minor clusters. Cattell then narrowed this down to 35 terms, later adding a 36th factor in the form of an IQ measure. Through factor analyses conducted from 1945 to 1948, he produced solutions with 11 or 12 factors.
In 1947, Hans Eysenck of University College London published his book Dimensions of Personality. He posited that the two most important personality dimensions were "Extraversion" and "Neuroticism", a term that he coined.
In July 1949, Donald Fiske of the University of Chicago used 22 terms adapted from Cattell's 1947 study and, through surveys of male university students and statistical analysis, derived five factors: "Social Adaptability", "Emotional Control", "Conformity", "Inquiring Intellect", and "Confident Self-expression". In the same year, Cattell, with Maurice Tatsuoka and Herbert Eber, found 4 additional factors, which they believed consisted of information that could only be provided through self-rating. With this understanding, they created the sixteen-factor 16PF Questionnaire.
In 1953, John W French of Educational Testing Service published an extensive meta-analysis of personality trait factor studies.
In 1957, Ernest Tupes of the United States Air Force undertook a personality trait study of US Air Force officers. Each was rated by their peers using Cattell's 35 terms (or in some cases, the 30 most reliable terms). In 1958, Tupes and Raymond Christal began a US Air Force study by taking 37 personality factors and other data found in Cattell's 1947 paper, Fiske's 1949 paper, and Tupes' 1957 paper. Through statistical analysis, they derived five factors they labeled "Surgency", "Agreeableness", "Dependability", "Emotional Stability", and "Culture". In addition to the influence of Cattell and Fiske's work, they strongly noted the influence of French's 1953 study. Tupes and Christal further tested and explained their 1958 work in a 1961 paper.
Warren Norman of the University of Michigan replicated Tupes and Christal's work in 1963. He relabeled "Surgency" as "Extroversion or Surgency" and "Dependability" as "Conscientiousness". He also found four subordinate scales for each factor. Norman's paper was much more widely read than Tupes and Christal's papers had been. Norman's later Oregon Research Institute colleague Lewis Goldberg continued this work.
In the 4th edition of the 16PF Questionnaire released in 1968, 5 "global factors" derived from the 16 factors were identified: "Extraversion", "Independence", "Anxiety", "Self-control" and "Tough-mindedness". 16PF advocates have since called these "the original Big 5".
Hiatus in research
During the 1970s, the changing zeitgeist made publication of personality research difficult. In his 1968 book Personality and Assessment, Walter Mischel asserted that personality instruments could not predict behavior with a correlation of more than 0.3. Social psychologists like Mischel argued that attitudes and behavior were not stable, but varied with the situation. Predicting behavior from personality instruments was claimed to be impossible.
Renewed attention
In 1978, Paul Costa and Robert McCrae of the National Institutes of Health published a book chapter describing their Neuroticism-Extroversion-Openness (NEO) model. The model was based on the three factors in its name. They used Eysenck's concept of "Extroversion" rather than Carl Jung's. Each factor had six facets. The authors expanded their explanation of the model in subsequent papers.
Also in 1978, British psychologist Peter Saville of Brunel University applied statistical analysis to 16PF results, and determined that the model could be reduced to five factors, "Anxiety", "Extraversion", "Warmth", "Imagination" and "Conscientiousness".
At a 1980 symposium in Honolulu, Lewis Goldberg, Naomi Takemoto-Chock, Andrew Comrey, and John M. Digman, reviewed the available personality instruments of the day. In 1981, Digman and Takemoto-Chock of the University of Hawaii reanalysed data from Cattell, Tupes, Norman, Fiske and Digman. They re-affirmed the validity of the five factors, naming them "Friendly Compliance vs. Hostile Non-compliance", "Extraversion vs. Introversion", "Ego Strength vs. Emotional Disorganization", "Will to Achieve" and "Intellect". They also found weak evidence for the existence of a sixth factor, "Culture".
Peter Saville and his team included the five-factor "Pentagon" model as part of the Occupational Personality Questionnaires (OPQ) in 1984. This was the first commercially available Big Five test. Its factors are "Extroversion", "Vigorous", "Methodical", "Emotional Stability", and "Abstract".
This was closely followed by another commercial test, the NEO PI three-factor personality inventory, published by Costa and McCrae in 1985. It used the three NEO factors. The methodology employed in constructing the NEO instruments has since been subject to critical scrutiny.
Emerging methodologies increasingly confirmed personality theories during the 1980s. Though generally failing to predict single instances of behavior, researchers found that they could predict patterns of behavior by aggregating large numbers of observations. As a result, correlations between personality and behavior increased substantially, and it became clear that "personality" did in fact exist.
In 1992, the NEO PI evolved into the NEO PI-R, adding the factors "Agreeableness" and "Conscientiousness", and becoming a Big Five instrument. This set the names for the factors that are now most commonly used. The NEO maintainers call their model the "Five Factor Model" (FFM). Each NEO personality dimension has six subordinate facets.
Subsequent developments
Wim Hofstee at the University of Groningen used a lexical hypothesis approach with the Dutch language to develop what became the International Personality Item Pool in the 1990s. Further development in Germany and the United States saw the pool based on three languages. Its questions and results have been mapped to various Big Five personality typing models.
Kibeom Lee and Michael Ashton released a book describing their HEXACO model in 2004. It adds a sixth factor, "Honesty-Humility" to the five (which it calls "Emotionality", "Extraversion", "Agreeableness", "Conscientiousness", and "Openness to Experience"). Each of these factors has four facets.
In 2007, Colin DeYoung, Lena C. Quilty and Jordan Peterson concluded that the 10 aspects of the Big Five may have distinct biological substrates. This was derived through factor analyses of two data samples with the International Personality Item Pool, followed by cross-correlation with scores derived from 10 genetic factors identified as underlying the shared variance among the Revised NEO Personality Inventory facets.
By 2009, personality and social psychologists generally agreed that both personal and situational variables are needed to account for human behavior.
A FFM-associated test was used by Cambridge Analytica, and was part of the "psychographic profiling" controversy during the 2016 US presidential election.
Descriptions of the particular personality traits
When factor analysis is applied to personality survey data, it reveals semantic associations: some words used to describe aspects of personality are often applied to the same person. For example, someone described as conscientious is more likely to be described as "always prepared" than as "messy". These associations suggest five broad dimensions used in common language to describe the human personality, temperament, and psyche.
Beneath each proposed global factor, there are a number of correlated and more specific primary factors. For example, extraversion is typically associated with qualities such as gregariousness, assertiveness, excitement-seeking, warmth, activity, and positive emotions. These traits are not black and white; each one is treated as a spectrum.
Openness to experience
Openness to experience is a general appreciation for art, emotion, adventure, unusual ideas, imagination, curiosity, and variety of experience. People who are open to experience are intellectually curious, open to emotion, sensitive to beauty, and willing to try new things. They tend to be, when compared to closed people, more creative and more aware of their feelings. They are also more likely to hold unconventional beliefs. Open people can be perceived as unpredictable or lacking focus, and are more likely to engage in risky behaviour or drug-taking. Moreover, individuals with high openness are said to pursue self-actualisation specifically by seeking out intense, euphoric experiences. Conversely, those with low openness seek fulfilment through perseverance and are characterised as pragmatic and data-driven, sometimes even perceived as dogmatic and closed-minded. Some disagreement remains about how to interpret and contextualise the openness factor, as there is a lack of biological support for this particular trait: in brain-imaging studies of volume differences, openness, unlike the other four traits, has not shown a significant association with any brain region.
Sample items
I have a rich vocabulary.
I have a vivid imagination.
I have excellent ideas.
I am quick to understand things.
I use difficult words.
I spend time reflecting on things.
I am full of ideas.
I have difficulty understanding abstract ideas. (Reversed)
I am not interested in abstract ideas. (Reversed)
I do not have a good imagination. (Reversed)
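Items marked "(Reversed)" are keyed in the opposite direction, so inventories flip them before averaging a scale score. A minimal scoring sketch, assuming a 5-point Likert scale; the item names are hypothetical shorthand, not actual inventory wording.

# Score one Big Five scale from Likert responses (sketch).
# Reverse-keyed items are flipped as (scale_max + 1 - rating).
def score_scale(responses, reversed_items, scale_max=5):
    """Average Likert responses (1..scale_max) for one trait scale."""
    total = 0.0
    for item, rating in responses.items():
        if item in reversed_items:
            rating = scale_max + 1 - rating
        total += rating
    return total / len(responses)

# Hypothetical openness responses; the last two items are reverse-keyed.
answers = {
    "vivid_imagination": 5,
    "full_of_ideas": 4,
    "no_interest_in_abstract_ideas": 1,
    "no_good_imagination": 2,
}
print(score_scale(answers, reversed_items={
    "no_interest_in_abstract_ideas", "no_good_imagination"}))  # 4.5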
Conscientiousness
Conscientiousness is a tendency to be self-disciplined, act dutifully, and strive for achievement against measures or outside expectations. It is related to people's level of impulse control, regulation, and direction. High conscientiousness is often perceived as being stubborn and focused. Low conscientiousness is associated with flexibility and spontaneity, but can also appear as sloppiness and lack of reliability. High conscientiousness indicates a preference for planned rather than spontaneous behaviour.
Sample items
I am always prepared.
I pay attention to details.
I get chores done right away.
I follow a schedule.
I am exacting in my work.
I do not like order. (Reversed)
I leave my belongings around. (Reversed)
I make a mess of things. (Reversed)
I often forget to put things back in their proper place. (Reversed)
I shirk my duties. (Reversed)
Extraversion
Extraversion is characterised by breadth of activities (as opposed to depth), surgency from external activities/situations, and energy creation from external means. The trait is marked by pronounced engagement with the external world. Extraverts enjoy interacting with people, and are often perceived as energetic. They tend to be enthusiastic and action-oriented. They possess high group visibility, like to talk, and assert themselves. Extraverts may appear more dominant in social settings, as opposed to introverts in that setting.
Introverts have lower social engagement and energy levels than extraverts. They tend to seem quiet, low-key, deliberate, and less involved in the social world. Their lack of social involvement should not be interpreted as shyness or depression, but as greater independence of their social world than extraverts. Introverts need less stimulation and more time alone than extraverts. This does not mean that they are unfriendly or antisocial; rather, they are aloof and reserved in social situations.
Generally, people are a combination of extraversion and introversion, with personality psychologist Hans Eysenck suggesting a model by which differences in their brains produce these traits.
Sample items
I am the life of the party.
I feel comfortable around people.
I start conversations.
I talk to a lot of different people at parties.
I do not mind being the center of attention.
I do not talk a lot. (Reversed)
I keep in the background. (Reversed)
I have little to say. (Reversed)
I do not like to draw attention to myself. (Reversed)
I am quiet around strangers. (Reversed)
Agreeableness
Agreeableness is the general concern for social harmony. Agreeable individuals value getting along with others. They are generally considerate, kind, generous, trusting and trustworthy, helpful, and willing to compromise their interests with others. Agreeable people also have an optimistic view of human nature, and agreeableness helps people cope with stress.
Disagreeable individuals place self-interest above getting along with others. They are generally unconcerned with others' well-being and are less likely to extend themselves for other people. Sometimes their skepticism about others' motives causes them to be suspicious, unfriendly, and uncooperative. Disagreeable people are often competitive or challenging, which can be seen as argumentative or untrustworthy.
Because agreeableness is a social trait, research has shown that one's agreeableness positively correlates with the quality of relationships with one's team members. Agreeableness also positively predicts transformational leadership skills. In a study conducted among 169 participants in leadership positions in a variety of professions, individuals were asked to take a personality test and were directly evaluated by the subordinates they supervised. Highly agreeable leaders were more likely to be considered transformational rather than transactional. Although the relationship was not strong (r=0.32, β=0.28, p<0.01), it was the strongest of the Big Five traits. However, the same study could not predict leadership effectiveness as evaluated by the leader's direct supervisor.
Conversely, agreeableness has been found to be negatively related to transactional leadership in the military. A study of Asian military units showed that agreeable people are more likely to be poor transactional leaders. Therefore, with further research, organisations may be able to determine an individual's potential for performance based on their personality traits. For instance, in their journal article "Which Personality Attributes Are Most Important in the Workplace?" Paul Sackett and Philip Walmsley claim that conscientiousness and agreeableness are "important to success across many different jobs."
Sample items
I am interested in people.
I sympathise with others' feelings.
I have a soft heart.
I take time out for others.
I feel others' emotions.
I make people feel at ease.
I am not really interested in others. (Reversed)
I insult people. (Reversed)
I am not interested in other people's problems. (Reversed)
I feel little concern for others. (Reversed)
Neuroticism
Neuroticism is the tendency to have strong negative emotions, such as anger, anxiety, or depression. It is sometimes called emotional instability, or is reversed and referred to as emotional stability. According to Hans Eysenck's (1967) theory of personality, neuroticism is associated with low tolerance for stress or a strong dislike of change. Neuroticism is a classic temperament trait that has been studied in temperament research for decades, even before it was adapted by the Five Factor Model.
Neurotic people are emotionally reactive and vulnerable to stress. They are more likely to interpret ordinary situations as threatening. They can perceive minor frustrations as hopelessly difficult. Their negative emotional reactions tend to stay for unusually long periods of time, which means they are often in a bad mood. For instance, neuroticism is connected to pessimism toward work, to certainty that work hinders personal relationships, and to higher levels of anxiety from the pressures at work. Furthermore, neurotic people may display more skin-conductance reactivity than calm and composed people. These problems in emotional regulation can make a neurotic person think less clearly, make worse decisions, and cope less effectively with stress. Being disappointed with one's life achievements can make one more neurotic and increase one's chances of falling into clinical depression. Moreover, neurotic individuals tend to experience more negative life events, but neuroticism also changes in response to positive and negative life experiences. Also, neurotic people tend to have worse psychological well-being.
At the other end of the scale, less neurotic individuals are less easily upset and are less emotionally reactive. They tend to be calm, emotionally stable, and free from persistent negative feelings. Freedom from negative feelings does not mean that low scorers experience a lot of positive feelings; that is related to extraversion instead.
Neuroticism is similar but not identical to being neurotic in the Freudian sense (i.e., neurosis). Some psychologists prefer to call neuroticism by the term emotional instability to differentiate it from the term neurotic in a career test.
Sample items
I get stressed out easily.
I worry about things.
I am easily disturbed.
I get upset easily.
I change my mood a lot.
I have frequent mood swings.
I get irritated easily.
I often feel blue.
I am relaxed most of the time. (Reversed)
I seldom feel blue. (Reversed)
Biological and developmental factors
The factors that influence personality are called the determinants of personality. These factors govern the traits that a person develops from childhood onward.
Temperament and personality
There are debates between temperament researchers and personality researchers as to whether or not biologically based differences define a concept of temperament or a part of personality. The presence of such differences in pre-cultural individuals (such as animals or young infants) suggests that they belong to temperament since personality is a socio-cultural concept. For this reason developmental psychologists generally interpret individual differences in children as an expression of temperament rather than personality. Some researchers argue that temperaments and personality traits are age-specific demonstrations of virtually the same internal qualities. Some believe that early childhood temperaments may become adolescent and adult personality traits as individuals' basic genetic characteristics interact with their changing environments to various degrees.
Researchers of adult temperament point out that, similarly to sex, age, and mental illness, temperament is based on biochemical systems, whereas personality is a product of the socialisation of an individual possessing these four types of features. Temperament interacts with socio-cultural factors but, like sex and age, cannot be controlled or easily changed by these factors.
Therefore, it is suggested that temperament (neurochemically based individual differences) should be kept as an independent concept for further studies and not be confused with personality (culturally-based individual differences, reflected in the origin of the word "persona" (Lat) as a "social mask").
Moreover, temperament refers to dynamic features of behaviour (energetic, tempo, sensitivity, and emotionality-related), whereas personality is to be considered a psycho-social construct comprising the content characteristics of human behaviour (such as values, attitudes, habits, preferences, personal history, self-image). Temperament researchers point out that the lack of attention to surviving temperament research by the creators of the Big Five model led to an overlap between its dimensions and dimensions described in multiple, much earlier temperament models. For example, neuroticism reflects the traditional temperament dimension of emotionality, studied by Jerome Kagan's group since the 1960s, and extraversion was first introduced as a temperament type by Jung in the 1920s.
Heritability
A 1996 behavioural genetics study of twins suggested that heritability (the degree of variation in a trait within a population that is due to genetic variation in that population) and environmental factors both influence all five factors to the same degree. Among four twin studies examined in 2003, the mean percentage of heritability was calculated for each personality trait, and it was concluded that heritability influenced the five factors broadly. The self-report estimates were as follows: openness to experience 57%, extraversion 54%, conscientiousness 49%, neuroticism 48%, and agreeableness 42%.
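Twin designs of this kind often gauge heritability from the gap between monozygotic and dizygotic twin correlations, for instance via Falconer's formula h² = 2(r_MZ − r_DZ); the cited studies fit more elaborate biometric models, but the basic idea can be sketched with made-up correlations.

# Falconer's estimate of heritability from twin correlations (sketch).
def falconer_h2(r_mz, r_dz):
    """h^2 = 2 * (r_MZ - r_DZ)."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical twin correlations for an extraversion score:
print(falconer_h2(r_mz=0.50, r_dz=0.23))  # ~0.54, i.e. roughly 54% heritable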
Non-humans
The Big Five personality traits have been assessed in some non-human species, but the methodology is debatable. In one series of studies, human ratings of chimpanzees using the Hominoid Personality Questionnaire revealed factors of extraversion, conscientiousness, and agreeableness, as well as an additional factor of dominance, across hundreds of chimpanzees in zoological parks, a large naturalistic sanctuary, and a research laboratory. Neuroticism and openness factors were found in an original zoo sample but were not replicated in a new zoo sample or in other settings (perhaps reflecting the design of the questionnaire). A review of studies found that markers for the three dimensions extraversion, neuroticism, and agreeableness were found most consistently across different species, followed by openness; only chimpanzees showed markers for conscientious behavior.
A study completed in 2020 concluded that dolphins have some personality traits similar to those of humans; both are large-brained, intelligent animals, yet they have evolved separately for millions of years.
Development during childhood and adolescence
Research on the Big Five, and personality in general, has focused primarily on individual differences in adulthood, rather than in childhood and adolescence, and often include temperament traits. Recently, there has been growing recognition of the need to study child and adolescent personality trait development in order to understand how traits develop and change throughout the lifespan.
Recent studies have begun to explore the developmental origins and trajectories of the Big Five among children and adolescents, especially those that relate to temperament. Many researchers have sought to distinguish between personality and temperament. Temperament often refers to early behavioral and affective characteristics that are thought to be driven primarily by genes. Models of temperament often include four trait dimensions: surgency/sociability, negative emotionality, persistence/effortful control, and activity level. Some of these differences in temperament are evident at, if not before, birth. For example, both parents and researchers recognize that some newborn infants are peaceful and easily soothed while others are comparatively fussy and hard to calm. Unlike temperament, however, many researchers view the development of personality as gradually occurring throughout childhood. Contrary to some researchers who question whether children have stable personality traits, Big Five or otherwise, most researchers contend that there are significant psychological differences between children that are associated with relatively stable, distinct, and salient behavior patterns.
The structure, manifestations, and development of the Big Five in childhood and adolescence have been studied using a variety of methods, including parent- and teacher-ratings, preadolescent and adolescent self- and peer-ratings, and observations of parent-child interactions. Results from these studies support the relative stability of personality traits across the human lifespan, at least from preschool age through adulthood. More specifically, research suggests that four of the Big Five – namely Extraversion, Neuroticism, Conscientiousness, and Agreeableness – reliably describe personality differences in childhood, adolescence, and adulthood. However, some evidence suggests that Openness may not be a fundamental, stable part of childhood personality. Although some researchers have found that Openness in children and adolescents relates to attributes such as creativity, curiosity, imagination, and intellect, many researchers have failed to find distinct individual differences in Openness in childhood and early adolescence. Potentially, Openness may (a) manifest in unique, currently unknown ways in childhood or (b) only manifest as children develop socially and cognitively. Other studies have found evidence for all of the Big Five traits in childhood and adolescence, as well as two other child-specific traits: Irritability and Activity. Despite these specific differences, the majority of findings suggest that personality traits – particularly Extraversion, Neuroticism, Conscientiousness, and Agreeableness – are evident in childhood and adolescence and are associated with distinct social-emotional patterns of behavior that are largely consistent with adult manifestations of those same personality traits. Some researchers have proposed that youth personality is best described by six trait dimensions: neuroticism, extraversion, openness to experience, agreeableness, conscientiousness, and activity. Despite some preliminary evidence for this "Little Six" model, research in this area has been delayed by a lack of available measures.
Previous research has found evidence that most adults become more agreeable and conscientious and less neurotic as they age. This has been referred to as the maturation effect. Many researchers have sought to investigate how trends in adult personality development compare to trends in youth personality development. Two main population-level indices have been important in this area of research: rank-order consistency and mean-level consistency. Rank-order consistency indicates the relative placement of individuals within a group. Mean-level consistency indicates whether groups increase or decrease on certain traits throughout the lifetime.
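Both indices can be computed directly from two waves of trait scores for the same individuals. A minimal sketch on simulated data (all numbers arbitrary): rank-order consistency is a rank correlation between waves, and mean-level change is the difference in group means.

import numpy as np
from scipy.stats import spearmanr

# Simulated trait scores for the same 200 people at two waves.
rng = np.random.default_rng(1)
wave1 = rng.normal(50, 10, size=200)
wave2 = 0.7 * wave1 + rng.normal(18, 7, size=200)

rho, _ = spearmanr(wave1, wave2)           # rank-order consistency
mean_change = wave2.mean() - wave1.mean()  # mean-level change
print(f"rank-order rho = {rho:.2f}, mean-level change = {mean_change:+.1f}")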
Findings from these studies indicate that, consistent with adult personality trends, youth personality becomes increasingly more stable in terms of rank-order throughout childhood. Unlike adult personality research, which indicates that people become agreeable, conscientious, and emotionally stable with age, some findings in youth personality research have indicated that mean levels of agreeableness, conscientiousness, and openness to experience decline from late childhood to late adolescence. The disruption hypothesis, which proposes that biological, social, and psychological changes experienced during youth result in temporary dips in maturity, has been proposed to explain these findings.
Extraversion/positive emotionality
In Big Five studies, extraversion has been associated with surgency. Children with high Extraversion are energetic, talkative, social, and dominant with children and adults, whereas children with low extraversion tend to be quiet, calm, inhibited, and submissive to other children and adults. Individual differences in extraversion first manifest in infancy as varying levels of positive emotionality. These differences in turn predict social and physical activity during later childhood and may represent, or be associated with, the behavioral activation system. In children, Extraversion/Positive Emotionality includes four sub-traits: three of these (activity, sociability, and shyness) are similar to the previously described traits of temperament; the other is dominance.
Activity: Similarly to findings in temperament research, children with high activity tend to have high energy levels and more intense and frequent motor activity compared to their peers. Salient differences in activity reliably manifest in infancy, persist through adolescence, and fade as motor activity decreases in adulthood or potentially develops into talkativeness.
Dominance: Children with high dominance tend to influence the behavior of others, particularly their peers, to obtain desirable rewards or outcomes. Such children are generally skilled at organizing activities and games and deceiving others by controlling their nonverbal behavior.
Shyness: Children with high shyness are generally socially withdrawn, nervous, and inhibited around strangers. In time, such children may become fearful even around "known others", especially if their peers reject them. A similar pattern has been described in longitudinal temperament studies of shyness.
Sociability: Children with high sociability generally prefer to be with others rather than alone. During middle childhood, the distinction between low sociability and high shyness becomes more pronounced, particularly as children gain greater control over how and where they spend their time.
Development throughout adulthood
Many studies of longitudinal data, which correlate people's test scores over time, and cross-sectional data, which compare personality levels across different age groups, show a high degree of stability in personality traits during adulthood, especially for Neuroticism, which is often regarded as a temperament trait; this parallels longitudinal temperament research on the same traits. Personality has been shown to stabilize for working-age individuals within about four years of starting work, and there is little evidence that adverse life events have any significant impact on individuals' personality. More recent research and meta-analyses of previous studies, however, indicate that change occurs in all five traits at various points in the lifespan. The newer research shows evidence for a maturation effect: on average, levels of agreeableness and conscientiousness typically increase with time, whereas extraversion, neuroticism, and openness tend to decrease. Research has also demonstrated that changes in Big Five personality traits depend on the individual's current stage of development. For example, levels of agreeableness and conscientiousness demonstrate a negative trend during childhood and early adolescence before trending upwards during late adolescence and into adulthood. In addition to these group effects, there are individual differences: different people demonstrate unique patterns of change at all stages of life.
In addition, some research (Fleeson, 2001) suggests that the Big Five should not be conceived of as dichotomies (such as extraversion vs. introversion) but as continua. Each individual has the capacity to move along each dimension as circumstances (social or temporal) change. He or she is therefore not simply at one end of each trait dichotomy but is a blend of both, exhibiting some characteristics more often than others.
Research on personality in old age has suggested that as individuals enter their elder years (79–86), those with lower IQ show a rise in extraversion but a decline in conscientiousness and physical well-being.
Group differences
Gender differences
Some cross-cultural research has shown some patterns of gender differences on responses to the NEO-PI-R and the Big Five Inventory. For example, women consistently report higher Neuroticism, Agreeableness, warmth (an extraversion facet) and openness to feelings, and men often report higher assertiveness (a facet of extraversion) and openness to ideas as assessed by the NEO-PI-R.
A study of gender differences in 55 nations using the Big Five Inventory found that women tended to be somewhat higher than men in neuroticism, extraversion, agreeableness, and conscientiousness. The difference in neuroticism was the most prominent and consistent, with significant differences found in 49 of the 55 nations surveyed.
Gender differences in personality traits are largest in prosperous, healthy, and more gender-egalitarian nations. The explanation for this, as stated by the researchers of a 2001 paper, is that actions by women in individualistic, egalitarian countries are more likely to be attributed to their personality, rather than being attributed to ascribed gender roles within collectivist, traditional countries.
Measured differences in the magnitude of sex differences between more and less developed world regions were driven by changes in the measured personalities of men, not women, in these respective regions. That is, men in highly developed world regions were less neurotic, less extraverted, less conscientious, and less agreeable than men in less developed world regions, whereas women tended not to differ in personality traits across regions.
Birth-order differences
Frank Sulloway argues that firstborns are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to siblings that were born later. Large-scale studies using random samples and self-report personality tests, however, have found milder effects than Sulloway claimed, or no significant effects of birth order on personality. A study using the Project Talent data, which is a large-scale representative survey of American high school students, with 272,003 eligible participants, found statistically significant but very small effects (the average absolute correlation between birth order and personality was .02) of birth order on personality, such that firstborns were slightly more conscientious, dominant, and agreeable, while also being less neurotic and less sociable. Parental socioeconomic status and participant gender had much larger correlations with personality.
In 2002, the Journal of Psychology published a study of Big Five personality trait differences in which researchers explored the relationship between the five-factor model and Universal-Diverse Orientation (UDO) in counselor trainees (Thompson, Brossart, and Mivielle, 2002). UDO refers to a social attitude marked by strong awareness and acceptance of both the similarities and differences among individuals (Miville, Romas, Johnson, and Lon, 2002). The study found that counselor trainees who are more open to creative expression (a facet of Openness to Experience, Openness to Aesthetics) among individuals are more likely to work with a diverse group of clients and to feel comfortable in that role.
Cultural differences
Individual differences in personality traits are widely understood to be conditioned by cultural context.
Research into the Big Five has been pursued in a variety of languages and cultures, including German, Chinese, and South Asian ones. For example, Thompson has claimed to find the Big Five structure across several cultures using an international English-language scale.
Cheung, van de Vijver, and Leong (2011) suggest, however, that the Openness factor is particularly unsupported in Asian countries and that a different fifth factor is identified.
Sopagna Eap et al. (2008) found that European-American men scored higher than Asian-American men on extroversion, conscientiousness, and openness, while Asian-American men scored higher than European-American men on neuroticism. Benet-Martínez and Karakitapoglu-Aygün (2003) arrived at similar results.
Recent work has found relationships between Geert Hofstede's cultural factors, Individualism, Power Distance, Masculinity, and Uncertainty Avoidance, with the average Big Five scores in a country. For instance, the degree to which a country values individualism correlates with its average extraversion, whereas people living in cultures which are accepting of large inequalities in their power structures tend to score somewhat higher on conscientiousness.
A 2017 study has found that countries' average personality trait levels are correlated with their political systems. Countries with higher average trait Openness tended to have more democratic institutions, an association that held even after factoring out other relevant influences such as economic development.
Attempts to replicate the Big Five have succeeded in some countries but not in others. Some research suggests, for instance, that Hungarians do not have a single agreeableness factor. Other researchers have found evidence for agreeableness but not for other factors.
Health
Personality and dementia
Some diseases cause changes in personality. For example, although gradual memory impairment is the hallmark feature of Alzheimer's disease, a systematic review of personality changes in Alzheimer's disease by Robins Wahlin and Byrne, published in 2011, found systematic and consistent trait changes mapped onto the Big Five. The largest change observed was a decrease in Conscientiousness; the next most significant were an increase in Neuroticism and a decrease in Extraversion, while Openness and Agreeableness also decreased. These changes in personality could assist with early diagnosis.
A study published in 2023 found that the Big Five personality traits may also influence the quality of life experienced by people with Alzheimer's disease and other dementias after diagnosis. In this study, people with dementia who had lower levels of Neuroticism self-reported higher quality of life than those with higher levels of Neuroticism, while those with higher levels of the other four traits self-reported higher quality of life than those with lower levels of these traits. This suggests that, as well as assisting with early diagnosis, the Big Five personality traits could help identify people with dementia who are potentially more vulnerable to adverse outcomes, and could inform personalized care planning and interventions.
Personality disorders
Over fifty published studies have related the FFM to personality disorders, and subsequent work has expanded on this research base, providing further empirical support for understanding the DSM personality disorders in terms of the FFM domains.
In her review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits". However, other researchers disagree that this model is widely accepted (see the Critique section below) and suggest that it simply replicates early temperament research. Notably, FFM publications never compare their findings to temperament models, even though temperament and mental disorders (especially personality disorders) are thought to be based on the same neurotransmitter imbalances, just to varying degrees.
The five-factor model was claimed to significantly predict all ten personality disorder symptoms and outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms. However, most predictions related to an increase in Neuroticism and a decrease in Agreeableness, and therefore did not differentiate between the disorders very well.
Common mental disorders
Converging evidence from several nationally representative studies has established three classes of mental disorders which are especially common in the general population: depressive disorders (e.g., major depressive disorder (MDD), dysthymic disorder), anxiety disorders (e.g., generalized anxiety disorder (GAD), post-traumatic stress disorder (PTSD), panic disorder, agoraphobia, specific phobia, and social phobia), and substance use disorders (SUDs). The Five Factor personality profiles of users of different drugs may differ; for example, heroin users typically show high Neuroticism, whereas ecstasy users do not show elevated Neuroticism but do show higher Extraversion.
These common mental disorders (CMDs) have been empirically linked to the Big Five personality traits, neuroticism in particular. Numerous studies have found that high scores on neuroticism significantly increase one's risk for developing a common mental disorder. A large-scale meta-analysis (n > 75,000) examining the relationship between all of the Big Five personality traits and common mental disorders found that low conscientiousness yielded consistently strong effects for each common mental disorder examined (i.e., MDD, dysthymic disorder, GAD, PTSD, panic disorder, agoraphobia, social phobia, specific phobia, and SUD). This finding parallels research on physical health, which has established that conscientiousness is the strongest personality predictor of reduced mortality and is highly negatively correlated with making poor health choices. With regard to the other personality domains, the meta-analysis found that all common mental disorders examined were defined by high neuroticism, most exhibited low extraversion, only SUD was linked to agreeableness (negatively), and no disorders were associated with openness. A meta-analysis of 59 longitudinal studies showed that high neuroticism predicted the development of anxiety, depression, substance abuse, psychosis, schizophrenia, and non-specific mental distress, even after adjustment for baseline symptoms and psychiatric history.
The personality-psychopathology models
Five major models have been proposed to explain the nature of the relationship between personality and mental illness. There is currently no single "best model", as each of them has received at least some empirical support. These models are not mutually exclusive: more than one may be operating for a particular individual, and various mental disorders may be explained by different models.
The Vulnerability/Risk Model: According to this model, personality contributes to the onset or etiology of various common mental disorders. In other words, pre-existing personality traits either cause the development of CMDs directly or enhance the impact of causal risk factors. There is strong support for neuroticism being a robust vulnerability factor.
The Pathoplasty Model: This model proposes that premorbid personality traits impact the expression, course, severity, and/or treatment response of a mental disorder. An example of this relationship would be a heightened likelihood of committing suicide in a depressed individual who also has low levels of constraint.
The Common Cause Model: According to the common cause model, personality traits are predictive of CMDs because personality and psychopathology have shared genetic and environmental determinants which result in non-causal associations between the two constructs.
The Spectrum Model: This model proposes that associations between personality and psychopathology are found because these two constructs occupy a single domain or spectrum, and psychopathology is simply a display of the extremes of normal personality function. Support for this model comes from the issue of criterion overlap. For instance, two of the primary facet scales of neuroticism in the NEO-PI-R are "depression" and "anxiety". Thus the fact that diagnostic criteria for depression, anxiety, and neuroticism assess the same content increases the correlations between these domains.
The Scar Model: According to the scar model, episodes of a mental disorder 'scar' an individual's personality, changing it in significant ways from premorbid functioning. An example of a scar effect would be a decrease in openness to experience following an episode of PTSD.
Physical health
To examine how the Big Five personality traits are related to subjective health outcomes (positive and negative mood, physical symptoms, and general health concern) and objective health conditions (chronic illness, serious illness, and physical injuries), Jasna Hudek-Knezevic and Igor Kardum conducted a study of 822 healthy volunteers (438 women and 384 men). Of the Big Five personality traits, they found neuroticism most related to worse subjective health outcomes and optimistic control to better subjective health outcomes. The associations with objective health conditions were weak, except that neuroticism significantly predicted chronic illness, whereas optimistic control was more closely related to physical injuries caused by accident.
Being highly conscientious may add as much as five years to one's life. The Big Five personality traits also predict positive health outcomes. In an elderly Japanese sample, conscientiousness, extraversion, and openness were related to lower risk of mortality.
Higher conscientiousness is associated with lower obesity risk. In already obese individuals, higher conscientiousness is associated with a higher likelihood of becoming non-obese over a five-year period.
Effect of personality traits through life
Education
Academic achievement
Personality plays an important role in academic achievement. A study of 308 undergraduates who completed the Five Factor Inventory and the Inventory of Learning Processes and reported their GPA suggested that conscientiousness and agreeableness have a positive relationship with all types of learning styles (synthesis-analysis, methodical study, fact retention, and elaborative processing), whereas neuroticism shows an inverse relationship. Moreover, extraversion and openness were positively related to elaborative processing. The Big Five personality traits accounted for 14% of the variance in GPA, suggesting that personality traits make some contribution to academic performance. Furthermore, reflective learning styles (synthesis-analysis and elaborative processing) were able to mediate the relationship between openness and GPA. These results indicate that intellectual curiosity significantly enhances academic performance when students combine their scholarly interest with thoughtful information processing.
A study of Israeli high-school students found that those in the gifted program systematically scored higher on openness and lower on neuroticism than those not in the gifted program. While not a measure of the Big Five, gifted students also reported less state anxiety than students not in the gifted program. Specific Big Five personality traits also predict learning styles in addition to academic success:
GPA and exam performance are both predicted by conscientiousness.
Neuroticism is negatively related to academic success.
Openness predicts utilizing synthesis-analysis and elaborative-processing learning styles.
Neuroticism negatively correlates with learning styles in general.
Openness and extraversion both predict all four learning styles.
Studies conducted on college students have concluded that hope, which is linked to agreeableness, conscientiousness, neuroticism, and openness, has a positive effect on psychological well-being. Individuals high in neurotic tendencies are less likely to display hopeful tendencies, and neuroticism is negatively associated with well-being. Personality can sometimes be flexible, and measuring the Big Five traits as individuals enter certain stages of life may predict their educational identity. Recent studies have suggested that an individual's personality is likely to affect their educational identity.
Learning styles
Learning styles have been described as "enduring ways of thinking and processing information".
In 2008, the Association for Psychological Science (APS) commissioned a report concluding that no significant evidence exists that learning-style assessments should be included in the education system. Thus it is premature, at best, to conclude that the evidence links the Big Five to "learning styles", or "learning styles" to learning itself.
However, the APS report also suggested that not all possible learning styles had been examined and that there could exist learning styles worthy of being included in educational practices. There are studies that conclude that personality and thinking styles may be intertwined in ways that link thinking styles to the Big Five personality traits. There is no general consensus on the number or specifications of particular learning styles, but there have been many different proposals.
As one example, Schmeck, Ribich, and Ramanaiah (1977) defined four types of learning styles:
synthesis analysis
methodical study
fact retention
elaborative processing
When all four of these learning styles are engaged in the classroom, each is likely to improve academic achievement. This model asserts that students develop either agentic/shallow processing or reflective/deep processing. Deep processors are more often found to be more conscientious, intellectually open, and extraverted than shallow processors. Deep processing is associated with appropriate study methods (methodical study) and a stronger ability to analyze information (synthesis analysis), whereas shallow processors prefer structured fact-retention learning styles and are better suited for elaborative processing.
Openness has been linked to learning styles that often lead to academic success and higher grades, such as synthesis analysis and methodical study. Because conscientiousness and openness have been shown to predict all four learning styles, this suggests that individuals who possess characteristics like discipline, determination, and curiosity are more likely to engage in all of the above learning styles.
According to research carried out by Komarraju, Karau, Schmeck & Avdic (2011), conscientiousness and agreeableness are positively related to all four learning styles, whereas neuroticism is negatively related to all four. Furthermore, extraversion and openness were only positively related to elaborative processing, and openness itself correlated with higher academic achievement.
In addition, a previous study by psychologist Mikael Jensen has shown relationships between the Big Five personality traits, learning, and academic achievement. According to Jensen, all personality traits except neuroticism are associated with learning goals and motivation. Openness and conscientiousness incline individuals to learn even when that learning goes unrecognized by others, while extraversion and agreeableness have similar effects. Conscientiousness and neuroticism also incline individuals to perform well in front of others for a sense of credit and reward, while agreeableness leads individuals to avoid this strategy of learning. Jensen's study concludes that individuals who score high on the agreeableness trait will likely learn just to perform well in front of others.
Besides openness, all Big Five personality traits helped predict the educational identity of students. Based on these findings, scientists are beginning to see that the Big Five traits might have a large influence on academic motivation, which in turn helps predict a student's academic performance.
Some authors have suggested that the Big Five personality traits combined with learning styles can help predict some of the variation in academic performance and academic motivation, which can then influence academic achievement. This may be because individual differences in personality represent stable approaches to information processing. For instance, conscientiousness has consistently emerged as a stable predictor of success in exam performance, largely because conscientious students experience fewer study delays. Conscientiousness shows a positive association with the four learning styles because students with high levels of conscientiousness develop focused learning strategies and appear to be more disciplined and achievement-oriented.
Distance learning
When the relationship between the five-factor personality traits and academic achievement in distance education settings was examined, the openness personality trait was found to be the most important variable, showing a positive relationship with academic achievement in distance education environments. In addition, self-discipline, extraversion, and adaptability were generally found to be in a positive relationship with academic achievement. Neuroticism emerged as the most important personality trait with a negative relationship to academic achievement. The results generally show that individuals who are organized, planful, and determined, and who are oriented toward new ideas and independent thinking, have increased success in distance education environments. On the other hand, individuals with tendencies toward anxiety and stress generally have lower academic success.
Employment
Occupation and personality fit
Researchers have long suggested that work is more likely to be fulfilling to the individual and beneficial to society when there is alignment between the person and their occupation. For instance, software programmers and scientists often rank high on Openness to experience and tend to be intellectually curious, think in symbols and abstractions, and find repetition boring. Psychologists and sociologists rank higher on Agreeableness and Openness than economists and jurists.
Work success
It is believed that the Big Five traits predict future performance outcomes to varying degrees. Specific facets of the Big Five traits are also thought to be indicators of success in the workplace, and each individual facet can give a more precise indication of a person's nature. Different occupations call for different facets of the traits, and various facets can predict success in different environments. For example, the estimated level of an individual's success in a job that requires public speaking differs from one based on one-on-one interactions, according to that person's particular facet profile.
Job outcome measures include job and training proficiency and personnel data. However, research demonstrating such prediction has been criticized, in part because of the apparently low correlation coefficients characterizing the relationship between personality and job performance. A 2007 article states: "The problem with personality tests is ... that the validity of personality measures as predictors of job performance is often disappointingly low. The argument for using personality tests to predict performance does not strike me as convincing in the first place."
Such criticisms were put forward by Walter Mischel, whose 1968 book Personality and Assessment triggered a crisis in personality psychometrics that lasted two decades. However, later work demonstrated that the correlations obtained by psychometric personality researchers were actually very respectable by comparative standards, and that the economic value of even incremental increases in prediction accuracy was exceptionally large, given the vast difference in performance by those who occupy complex job positions.
Research has suggested that individuals who are considered leaders typically exhibit lower levels of neuroticism, higher levels of openness, and balanced levels of conscientiousness and extraversion. Further studies have linked professional burnout to neuroticism, and extraversion to enduring positive work experience. Studies have linked national innovation, leadership, and ideation to openness to experience and conscientiousness. Occupational self-efficacy has also been shown to be positively correlated with conscientiousness and negatively correlated with neuroticism. Some research has also suggested that the conscientiousness of a supervisor is positively associated with an employee's perception of abusive supervision. Others have suggested that low agreeableness and high neuroticism are traits more related to abusive supervision.
Openness is positively related to proactivity at the individual and the organizational levels and is negatively related to team and organizational proficiency. These effects were found to be completely independent of one another. In this respect openness runs counter to conscientiousness, with which it is negatively correlated.
Agreeableness is negatively related to individual task proactivity. It is typically associated with lower career success and with being less able to cope with conflict. However, there are benefits to agreeableness, including higher subjective well-being, more positive interpersonal interactions and helping behavior, lower conflict, and lower deviance and turnover. Furthermore, attributes related to agreeableness are important for workforce readiness across a variety of occupations and performance criteria. Research has suggested, though, that those who are high in agreeableness are less successful in accumulating income.
Extraversion is associated with greater leadership emergence and effectiveness, as well as higher job and life satisfaction. However, extraversion can also lead to more impulsive behaviors, more accidents, and lower performance in certain jobs.
Conscientiousness is highly predictive of job performance in general and is positively related to all forms of work role performance, including job performance and job satisfaction, greater leadership effectiveness, and lower turnover and deviant behaviors. However, this trait is associated with reduced adaptability, lower learning in the initial stages of skill acquisition, and greater interpersonal abrasiveness when combined with low agreeableness.
Neuroticism is negatively related to all forms of work role performance and increases the chance of engaging in risky behaviors.
Two theories have been integrated in an attempt to account for these differences in work role performance. Trait activation theory posits that within a person trait levels predict future behavior, that trait levels differ between people, and that work-related cues activate traits which leads to work relevant behaviors. Role theory suggests that role senders provide cues to elicit desired behaviors. In this context, role senders provide workers with cues for expected behaviors, which in turn activates personality traits and work relevant behaviors. In essence, expectations of the role sender lead to different behavioral outcomes depending on the trait levels of individual workers, and because people differ in trait levels, responses to these cues will not be universal.
Romantic relationships
The Big Five model of personality has been used in attempts to predict satisfaction in romantic relationships and relationship quality in dating, engaged, and married couples.
Political identification
The Big Five personality model also has applications in the study of political psychology. Studies have found links between the Big Five personality traits and political identification. Several studies have found that individuals who score high in conscientiousness are more likely to hold a right-wing political identification, while a strong correlation has been identified between high scores in openness to experience and a left-leaning ideology. Although agreeableness, extraversion, and neuroticism have not been consistently linked to either conservative or liberal ideology, with studies producing mixed results, such traits are promising when analyzing the strength of an individual's party identification. However, correlations between the Big Five and political beliefs, while present, tend to be small, with one study finding correlations ranging from 0.14 to 0.24 (a correlation of 0.24 corresponds to only about 6% of shared variance).
Scope of predictive power
The predictive effects of the Big Five personality traits relate mostly to social functioning and rules-driven behavior and are not very specific for the prediction of particular aspects of behavior. For example, temperament researchers have noted that high neuroticism precedes the development of all common mental disorders, and they regard it as a temperament trait rather than a personality trait. Further evidence is required to fully uncover the nature of and differences between personality traits, temperament, and life outcomes. Social and contextual parameters also play a role in outcomes, and the interaction between the two is not yet fully understood.
Religiosity
Though the effect sizes are small, among the Big Five personality traits high agreeableness, conscientiousness, and extraversion relate to general religiosity, while openness relates negatively to religious fundamentalism and positively to spirituality. High neuroticism may be related to extrinsic religiosity, whereas intrinsic religiosity and spirituality reflect emotional stability.
Measurements
Several measures of the Big Five exist:
International Personality Item Pool (IPIP)
NEO-PI-R
The Ten-Item Personality Inventory (TIPI) and the Five Item Personality Inventory (FIPI) are very abbreviated rating forms of the Big Five personality traits.
Self-descriptive sentence questionnaires
Lexical questionnaires
Self-report questionnaires
Relative-scored Big 5 measure
The most frequently used measures of the Big Five comprise either items that are self-descriptive sentences or, in the case of lexical measures, items that are single adjectives. Due to the length of sentence-based and some lexical measures, short forms have been developed and validated for use in applied research settings where questionnaire space and respondent time are limited, such as the 40-item balanced International English Big-Five Mini-Markers or a very brief (10 item) measure of the Big Five domains. Research has suggested that some methodologies in administering personality tests are inadequate in length and provide insufficient detail to truly evaluate personality. Usually, longer, more detailed questions will give a more accurate portrayal of personality. The five factor structure has been replicated in peer reports. However, many of the substantive findings rely on self-reports.
Much of the evidence on the measures of the Big 5 relies on self-report questionnaires, which makes self-report bias and falsification of responses difficult to deal with and account for. It has been argued that the Big Five tests do not create an accurate personality profile because the responses given on these tests are not true in all cases and can be falsified. For example, questionnaires are answered by potential employees who might choose answers that paint them in the best light.
Research suggests that a relative-scored Big Five measure in which respondents had to make repeated choices between equally desirable personality descriptors may be a potential alternative to traditional Big Five measures in accurately assessing personality traits, especially when lying or biased responding is present. When compared with a traditional Big Five measure for its ability to predict GPA and creative achievement under both normal and "fake good"-bias response conditions, the relative-scored measure significantly and consistently predicted these outcomes under both conditions; however, the Likert questionnaire lost its predictive ability in the faking condition. Thus, the relative-scored measure proved to be less affected by biased responding than the Likert measure of the Big Five.
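The logic behind this robustness can be illustrated with a toy simulation (a minimal sketch with hypothetical numbers, not drawn from the cited study): if "fake good" responding inflates all of a respondent's self-ratings by roughly the same amount, absolute Likert scores drift toward the ceiling, while a forced choice between two equally desirable descriptors depends only on the within-person comparison, which a uniform inflation leaves unchanged.

```python
# A minimal sketch (hypothetical numbers): why a relative-scored measure can
# resist the uniform "fake good" inflation that distorts Likert scores.
import numpy as np

rng = np.random.default_rng(1)

# Latent standing of 5 respondents on two equally desirable descriptors,
# e.g. "organized" (conscientiousness) and "outgoing" (extraversion).
organized = rng.normal(0.0, 1.0, size=5)
outgoing = rng.normal(0.0, 1.0, size=5)

def likert(score, inflation=0.0):
    """Map a latent score to a 1-5 Likert response, plus any faking shift."""
    return np.clip(np.round(3 + score + inflation), 1, 5).astype(int)

# Under "fake good" responding, absolute Likert scores pile up near the top...
print("honest organized:", likert(organized))
print("faked organized: ", likert(organized, inflation=1.5))

# ...but a relative-scored item ("which describes you better?") depends only
# on the comparison between descriptors, which uniform inflation preserves.
honest_choice = organized > outgoing
faked_choice = (organized + 1.5) > (outgoing + 1.5)
print("choices identical under faking:", np.array_equal(honest_choice, faked_choice))
```

This sketch captures only the simplest faking strategy, a uniform shift; real faking can be more selective, which is why the empirical comparison under "fake good" instructions described above is the stronger evidence.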
Andrew H. Schwartz and colleagues analyzed 700 million words, phrases, and topic instances collected from the Facebook messages of 75,000 volunteers, who also took standard personality tests, and found striking variations in language with personality, gender, and age.
Critique
The proposed Big Five model has been subjected to considerable critical scrutiny in a number of published studies. One prominent critic of the model has been Jack Block at the University of California, Berkeley. In response to Block, the model was defended in a paper published by Costa and McCrae. This was followed by a number of published critical replies from Block.
It has been argued that there are limitations to the scope of the Big Five model as an explanatory or predictive theory. It has also been argued that measures of the Big Five account for only 56% of the normal personality trait sphere alone (not even considering the abnormal personality trait sphere). Also, the static Big Five is not theory-driven; it is merely a statistically driven investigation of certain descriptors that tend to cluster together, often based on less-than-optimal factor analytic procedures. Measures of the Big Five constructs appear to show some consistency in interviews, self-descriptions, and observations, and this static five-factor structure seems to be found across a wide range of participants of different ages and cultures. However, while genotypic temperament trait dimensions might appear across different cultures, the phenotypic expression of personality traits differs profoundly across cultures as a function of the different socio-cultural conditioning and experiential learning that takes place within different cultural settings.
Moreover, the fact that the Big Five model was based on the lexical hypothesis (i.e., on the verbal descriptors of individual differences) points to strong methodological flaws in this model, especially related to its main factors, Extraversion and Neuroticism. First, there is a natural pro-social bias of language in people's verbal evaluations. After all, language is an invention of group dynamics that was developed to facilitate socialization and the exchange of information and to synchronize group activity. This social function of language therefore creates a sociability bias in verbal descriptors of human behavior: there are more words related to social than to physical or even mental aspects of behavior. The sheer number of such descriptors will cause them to group into the largest factor in any language, and such grouping has nothing to do with the way that core systems of individual differences are set up. Second, there is also a negativity bias in emotionality (i.e., most emotions have negative affectivity), and there are more words in language to describe negative than positive emotions. Such asymmetry in emotional valence creates another bias in language. Experiments using the lexical hypothesis approach indeed demonstrated that the use of lexical material skews the resulting dimensionality according to the sociability bias of language and the negativity bias of emotionality, grouping all evaluations around these two dimensions. This means that the two largest dimensions in the Big Five model might be just an artifact of the lexical approach that this model employed.
Limited scope
One common criticism is that the Big Five does not explain all of human personality. Some psychologists have dissented from the model precisely because they feel it neglects other domains of personality, such as religiosity, manipulativeness/machiavellianism, honesty, sexiness/seductiveness, thriftiness, conservativeness, masculinity/femininity, snobbishness/egotism, sense of humour, and risk-taking/thrill-seeking. Dan P. McAdams has called the Big Five a "psychology of the stranger", because they refer to traits that are relatively easy to observe in a stranger; other aspects of personality that are more privately held or more context-dependent are excluded from the Big Five. Block has pointed to several less-recognized but successful efforts to specify aspects of character not subsumed by the model.
There may be debate as to what counts as personality and what does not, and the nature of the questions in a survey greatly influences the outcome. Multiple particularly broad question databases have failed to produce the Big Five as the top five traits.
In many studies, the five factors are not fully orthogonal to one another; that is, the five factors are not independent. Orthogonality is viewed as desirable by some researchers because it minimizes redundancy between the dimensions. This is particularly important when the goal of a study is to provide a comprehensive description of personality with as few variables as possible.
The model is inappropriate for studying early childhood, as children's language is not yet developed.
Methodological issues
Factor analysis, the statistical method used to identify the dimensional structure of observed variables, lacks a universally recognized basis for choosing among solutions with different numbers of factors. A five-factor solution depends on some degree of interpretation by the analyst, and a larger number of factors may underlie these five. This has led to disputes about the "true" number of factors. Big Five proponents have responded that although other solutions may be viable in a single data set, only the five-factor structure consistently replicates across different studies. Block argues that the use of factor analysis as the exclusive paradigm for conceptualizing personality is too limited.
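The indeterminacy can be made concrete with a small simulation (a sketch using synthetic data and scikit-learn's FactorAnalysis, not an analysis from the Big Five literature): even when questionnaire responses are generated from exactly five latent traits, goodness of fit alone does not single out the five-factor solution, because adding factors keeps improving the likelihood.

```python
# A minimal sketch: fit factor models with different numbers of factors to
# synthetic questionnaire data and show that fit alone does not pick "five".
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 1,000 respondents answering 25 items driven by 5 latent traits
# (5 items per trait), plus item-specific noise.
n_respondents, n_items, n_traits = 1000, 25, 5
loadings = np.zeros((n_items, n_traits))
for t in range(n_traits):
    loadings[t * 5:(t + 1) * 5, t] = rng.uniform(0.5, 0.9, size=5)
traits = rng.normal(size=(n_respondents, n_traits))
items = traits @ loadings.T + rng.normal(scale=0.6, size=(n_respondents, n_items))

# Compare solutions with 3 to 8 factors by average log-likelihood.
for k in range(3, 9):
    fa = FactorAnalysis(n_components=k, random_state=0).fit(items)
    print(f"{k} factors: avg log-likelihood = {fa.score(items):.3f}")

# The likelihood typically keeps creeping upward as k grows, so settling on
# k = 5 rests on interpretability and cross-study replication, not fit alone.
```

In practice researchers supplement raw fit with scree plots, parallel analysis, and, as the proponents' reply above emphasizes, replication across samples.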
Surveys in studies are often online surveys of college students (compare WEIRD bias). Results do not always replicate when run on other populations or in other languages, and it is not clear that different surveys measure the same five factors.
Moreover, the factor analysis that this model is based on is a linear method incapable of capturing nonlinear, feedback and contingent relationships between core systems of individual differences.
See also
Core self-evaluations
Dark triad
DISC assessment
Facet
Genomics of personality traits
Goal orientation
HEXACO model of personality structure
Moral foundations theory
Myers–Briggs Type Indicator
Personality psychology
Szondi test
Trait theory
References
External links
International Personality Item Pool, public domain list of items keyed to the big five personality traits.
Selection from the "Handbook of personality: Theory and research" for researchers
U.S. Regions Exhibit Distinct Personalities, Research Reveals
Discovery learning
Discovery learning is a technique of inquiry-based learning and is considered a constructivist-based approach to education. It is also referred to as problem-based learning, experiential learning and 21st century learning. It is supported by the work of learning theorists and psychologists Jean Piaget, Jerome Bruner, and Seymour Papert.
Jerome Bruner is often credited with originating discovery learning in the 1960s, but his ideas are very similar to those of earlier writers such as John Dewey. Bruner argues that "Practice in discovering for oneself teaches one to acquire information in a way that makes that information more readily viable in problem solving". This philosophy later became the discovery learning movement of the 1960s. The mantra of this philosophical movement suggests that people should "learn by doing".
The label of discovery learning can cover a variety of instructional techniques. According to a meta-analytic review conducted by Alfieri, Brooks, Aldrich, and Tenenbaum (2011), a discovery learning task can range from implicit pattern detection to the elicitation of explanations, working through manuals, and conducting simulations. Discovery learning can occur whenever the student is not provided with an exact answer but rather with the materials needed to find the answer themselves.
Discovery learning takes place in problem solving situations where learners interact with their environment by exploring and manipulating objects, wrestling with questions and controversies, or performing experiments, while drawing on their own experience and prior knowledge.
Characteristics
Discovery-based learning is typically characterized by minimal teacher guidance, fewer teacher explanations, solving problems with multiple solutions, use of hands-on materials, and minimal repetition and memorization.
Multiple essential components are required for successful discovery-based learning, including the following:
Teacher guidance where the emphasis is on building upon students’ reasoning and connecting to their experiences
Classroom culture where there is a shared sense of purpose between teacher and students, where open-mindedness and dialogue are encouraged
Students are encouraged to ask questions, inquire through exploration and collaborate with teacher and peers
Teacher's role
It has been suggested that effective teaching using discovery techniques requires teachers to do one or more of the following: (1) provide guided tasks that leverage a variety of instructional techniques; (2) have students explain their own ideas, assess the accuracy of those ideas, and provide feedback; and (3) provide examples of how to complete the tasks.
A critical success factor in discovery learning is that it must be teacher-assisted. Bruner (1961), one of the early pioneers of discovery learning, cautioned that discovery could not happen without some basic knowledge. Mayer (2004) argued that pure unassisted discovery should be eliminated due to the lack of evidence that it improves learning outcomes. Discovery learning can also result in students becoming confused and frustrated.
The teacher's role in discovery learning is therefore critical to the success of learning outcomes. Students must build foundational knowledge through examples, practice, and feedback, which provides a base for integrating additional information and building problem-solving and critical-thinking skills.
Benefits and limitations
Early research demonstrated that directed discovery had positive effects on the retention of information measured six weeks after instruction, compared with traditional direct instruction.
It is believed that the outcome of discovery-based learning is the development of inquiring minds and the potential for life-long learning.
Discovery learning promotes student exploration and collaboration with teachers and peers to solve problems. Children are also able to direct their own inquiry and be actively involved in the learning process which helps with student motivation.
Discovery learning is not without limitations, however. Some studies show that students in discovery situations are more likely than those receiving direct instruction to encounter inconsistent or misleading feedback, encoding errors, causal misattributions, and inadequate practice and elaborations. In these cases, direct instruction has been shown to be an efficient way to teach procedures that are difficult for students to discover on their own, such as those involved in geometry, algebra, and computer programming.
Assisted vs. unassisted discovery
A debate in the instructional community now questions the effectiveness of this model of instruction. The debate dates back to the 1950s when researchers first began to compare the results of discovery learning to other forms of instruction. In support of the fundamental concept of discovery learning, Bruner (1961) suggested that students are more likely to remember concepts if they discover them on their own as opposed to those that are taught directly.
In pure discovery learning, the learner is required to discover new content through conducting investigations or carrying out procedures while receiving little, if any, assistance. "For example, a science teacher might provide students with a brief demonstration of how perceptions of color change depending on the intensity of the light source and then ask them to design their own experiment to further examine this relationship". In this example the student is left to discover the content on his/her own. Because students are left to discover topics by themselves, researchers worry that the learning that takes place may involve errors or misconceptions, or be confusing or frustrating to the learner.
Research shows that the cognitive demands required for discovery in young children may hinder learning, as they have limited amounts of existing knowledge with which to integrate additional information. Bruner also cautioned that such discovery could not be made prior to, or without, at least some base of knowledge in the topic. Students who are presented with problems without foundational knowledge may not have the ability to work through solutions. The meta-analyses conducted by Alfieri and colleagues reconfirmed such findings.
Mayer (2004) argues that unassisted discovery learning tasks do not help learners discover problem-solving rules, conservation strategies, or programming concepts. He does acknowledge, however, that while constructivist-based approaches may be beneficial under some circumstances, pure discovery learning is inherently unstructured and hence will not benefit the learner. Mayer also points out that interest in discovery learning has waxed and waned since the 1960s. He argues that in each case the empirical literature has shown that the use of pure discovery methods is not advisable, yet time and time again researchers have renamed their instructional methods, been discredited, and renamed their movement again.
Alternatively, direct instruction that includes worked examples, scaffolding techniques, explicit explanation, and feedback is beneficial to learning (Alfieri et al., 2011). In addition, time spent practising newly learned concepts improves problem-solving skills (Paas and Van Gog, 2006).
There appears to be benefits to both direct instruction and assisted discovery.
In special needs education
With the push for special needs students to take part in the general education curriculum, prominent researchers in this field doubt whether general education classes rooted in discovery-based learning can provide an adequate learning environment for special needs students. Kauffman (2002) has related his concerns over the use of discovery-based learning as opposed to direct instruction, commenting that for students to be highly successful in learning the facts and skills they need, those facts and skills should be taught directly rather than indirectly; that is, the teacher is in control of instruction, not the student, and information is given to students.
This view is especially strong regarding students with math disabilities and math instruction. Fuchs et al. (2008) comment: "Typically developing students profit from the general education mathematics program, which relies, at least in part, on a constructivist, inductive instructional style. Students who accrue serious mathematics deficits, however, fail to profit from those programs in a way that produces understanding of the structure, meaning, and operational requirements of mathematics ... Effective intervention for students with a math disability requires an explicit, didactic form of instruction ..." Fuchs et al. go on to note that explicit or direct instruction should be followed up with instruction that anticipates misunderstanding and counters it with precise explanations.
However, few studies focus on the long-term results of direct instruction, and long-term studies may find that it is not superior to other instructional methods. For instance, one study of fourth graders who were instructed for 10 weeks and measured over 17 weeks found that direct instruction did not lead to stronger results in the long term than practice alone. Other researchers note that promising work is being done in the field to incorporate constructivism and cooperative grouping so that curriculum and pedagogy can meet the needs of diverse learners in an inclusion setting. However, it is questionable how successful these developed strategies are for student outcomes, both initially and in the long term.
Effects on cognitive load
Research has been conducted over the years examining the unfavorable effects of discovery learning, specifically with beginning learners. "Cognitive load theory suggests that the free exploration of a highly complex environment may generate a heavy working memory load that is detrimental to learning". Beginning learners do not have the necessary skills to integrate the new information with information they have learned in the past. Sweller reported that a better alternative to discovery learning was guided instruction. According to Kirschner, Sweller and Clark (2006), guided instruction produces more immediate recall of facts than unguided approaches, along with longer-term transfer and problem-solving skills.
Enhanced discovery learning
Robert J. Marzano (2011) describes enhanced discovery learning as a process that involves preparing the learner for the discovery learning task by providing the knowledge needed to successfully complete it. In this approach, the teacher not only provides the knowledge required to complete the task, but also provides assistance during the task. This preparation of the learner and assistance may require some direct instruction. "For example, before asking students to consider how best to stretch the hamstring muscle in cold weather, the teacher might present a series of lessons that clarify basic facts about muscles and their reaction to changes in temperature".
Another aspect of enhanced discovery learning is allowing the learner to generate ideas about a topic along the way and then having students explain their thinking. A teacher who asks the students to generate their own strategy for solving a problem may be provided with examples in how to solve similar problems ahead of the discovery learning task. "A student might come up to the front of the room to work through the first problem, sharing his or her thinking out loud. The teacher might question students and help them formulate their thinking into general guidelines for estimation, such as "start by estimating the sum of the highest place-value numbers". As others come to the front of the room to work their way through problems out loud, students can generate and test more rules".
Further reading
Bolton, David and Goodey, Noel. Trouble with verbs?: Guided Discovery Materials, Exercises and Teaching Tips at Elementary and Intermediate Levels (1999). Addlestone, Surrey: Delta Pub., London
Carin, Arthur. Guided Discovery Activities for Elementary School Science (1993). Merrill Publishing Company.
Nissani, Moti. "Dancing flies: a guided discovery illustration of the nature of science". American Biology Teacher 58: 168–171 (1996). DOI: 10.2307/4450108.
See also
Active learning
Cognitive load
Constructivism (learning theory)
Inquiry-based learning
Jerome Bruner
Moore method
Problem-based learning
Progressive education
Science education
References
Adelson, Rachel (2004). "Instruction vs. Exploration in Science Learning". Monitor on Psychology, APA Online, Vol 35, No 6.
Alfieri, L., Brooks, P. J., Aldrich, N. J., & Tenenbaum, H. R. (2011). "Does discovery-based instruction enhance learning?" Journal of Educational Psychology, 103(1), 1–18.
Carroll, J., & Beman, V. (2015). "Boys, inquiry learning and the power of choice in middle school English classroom". Adolescent Success. 15(1): 4–17.
Dorier, J. L. & Garcia, J. F. (2013). "Challenges and opportunities for the implementation of inquiry-based learning in day-to-day teaching". ZDM Mathematics Education. 45: 837–849
Huang, X. (2014). "Math crisis: Political game or imagined problem?" Our Schools/Our Selves. 73–85.
Mandrin, P., & Preckel, D. (2009). "Effect of Similarity-Based Guided Discovery Learning on Conceptual Performance". School Science And Mathematics, 109(3), 133–145.
Monroe, P. (Ed.). (1911). "Discovery, method of". A Cyclopedia of Education Vol. 2, p. 336. New York, NY: The Macmillan Company.
Paas, F., & Van Gog, T. (2006). "Optimising worked example instruction: different ways to increase germane cognitive load". Learning and Instruction. 16(2): 87–91.
Stokke, A. 2015. What to do about Canada's declining math scores. C. D. Howe Institute. Commentary 427.
External links
The Discovery Learning Project at the College of Natural Sciences of the University of Texas at Austin
Carleton College. Guided discovery problems: Examples (in: Teaching Methods: A Collection of Pedagogic Techniques and Example Activities)
Education Quality and Accountability Office (EQAO). http://www.eqao.com/en/assessments/results/communication-docs/provincial-report-highlights-elementary-2017.pdf
Science exercises and instructional materials: Teaching science as if minds mattered!
Garelick, Barry (2009). Discovery Learning in Math: Exercises versus Problems, Nonpartisan Education Review / Essays, 5(2).
Social penetration theory
The social penetration theory (SPT) proposes that as relationships develop, interpersonal communication moves from relatively shallow, non-intimate levels to deeper, more intimate ones. The theory was formulated by psychologists Irwin Altman of the University of Utah and Dalmas Taylor of the University of Delaware in 1973 to understand relationship development between individuals. Altman and Taylor noted that relationships "involve different levels of intimacy of exchange or degree of social penetration". SPT is known as an objective theory as opposed to an interpretive theory, meaning it is based on data drawn from actual experiments and not simply from conclusions based on individuals' specific experiences.
SPT states that relationship development occurs primarily through self-disclosure, when one intentionally reveals information such as personal motives, desires, feelings, thoughts, and experiences to others. The theory assumes that as people become closer to others, positive reinforcement through positive interactions allows them to achieve deeper levels of intimacy. The theory is also guided by the assumptions that relationship development is systematic and predictable. SPT also examines de-penetration, the process by which some relationships regress over time and eventually end.
Assumptions
SPT is based on four basic assumptions:
Relationship development moves from superficial layers to intimate ones. For instance, people tend to present their outer images only, talking about hobbies on a first date. As the relational development progresses, wider and more controversial topics such as political views are included in the dialogues.
Interpersonal relationships develop in a generally systematic and predictable manner. This assumption indicates the predictability of relationship development. Although it is impossible to foresee the exact and precise path of relational development, there is a certain trajectory to follow. As Altman and Taylor note, "[p]eople seem to possess very sensitive tuning mechanisms which enable them to program carefully their interpersonal relationships."
Relational development can move backward, resulting in de-penetration and dissolution. For example, after prolonged and fierce fights, a couple who originally planned to get married may decide to break up and ultimately become strangers.
Self-disclosure is the key to facilitate relationship development, and involves disclosing and sharing personal information to others. It enables individuals to know each other and plays a crucial role in determining how far a relationship can go, as gradual exploration of mutual selves is essential in the process of social penetration.
Self-disclosure
Self-disclosure is a purposeful disclosure of personal information to another person. Disclosure may include sharing both high-risk and low-risk information as well as personal experiences, ideas, attitudes, feelings, values, past facts and life stories, future hopes, dreams, ambitions, and goals. In sharing information about themselves, people make choices about what to share and with whom to share it. Altman and Taylor believe that opening the inner self to the other individual is the main path to reaching intimate relationships.
As for the speed of self-disclosure, Altman and Taylor were convinced that the process of social penetration moves quickly in the beginning stages of a relationship and slows considerably in the later stages. Those who are able to develop a long-term, positive reward/cost outcome are the same people who are able to share important matches of breadth categories. The early reward/cost assessment has a strong impact on reactions, involvement, and expectations regarding the relationship's future, and plays a major role in its outcome.
Uncertainty reduction theory
The uncertainty reduction theory (URT) is the process that people experience as they begin new relationships. When two strangers meet, they engage by asking each other questions in order to build a stronger relationship. In the context of both URT and SPT, questions are seen as a tool to learn information about the other in order to receive rewards. These rewards are either physical/material rewards, or abstract rewards that supplement the relationship as it develops.
Through this process of asking questions in a new relationship, uncertainty and anxiety can be reduced and lead to a more developed relationship between the two people. Where social penetration theory postulates that new relationships (either romantic or platonic) steadily evolve into deeper conversations and interactions, uncertainty reduction theory postulates that these new relationships can reach that deep level through question and answer processes. Although SPT primarily focuses on the linear trajectory of the relationship as the two parties get a deeper understanding of one another, URT is relevant in that it focuses on each instance when uncertainty may need to be reduced through question asking on a case-by-case basis (e.g. the two people initially meet and questions are asked; later on in the relationship, one party asks the other to meet their parents, and the two engage in URT to reduce the anxiety and uncertainty surrounding the situation).
Disclosure reciprocity
Self-disclosure is reciprocal, especially in the early stages of relationship development. Disclosure reciprocity is an indispensable component in SPT, and is a process where one person reveals personal information of a certain intimacy level, and the other person discloses information of the same level. It is two-way disclosure, or mutual disclosure. Disclosure reciprocity can induce positive and satisfactory feelings and drive forward relational development, because as mutual disclosure takes place between individuals, they might feel a sense of emotional equity. Disclosure reciprocity occurs when the openness of one person is reciprocated with the same degree of the openness from the other person. For instance, if someone was to bring up their experience with an intimate topic such as weight gain or having divorced parents, the person they are talking to could reciprocate by sharing their own experience.
Reciprocated self-disclosure also forms a foundation for interpersonal relationships. If self-disclosure is not reciprocated in an interpersonal relationship, the relationship potentially faces a stage of de-penetration or "slow deterioration" (West, 2018). This can happen in a few ways, such as oversharing and undersharing. Oversharing personal information can lead to the end of the relationship, as "[s]ome partners may be ill-equipped and underprepared to know someone so intimately" (West, 2018). Since self-disclosure depends on going back and forth, if one partner does not share in that exchange, it creates an imbalance in the relationship, and the bond is unlikely to progress, since the other partner only knows a certain amount. This also leads the partner who discloses to stop disclosing further, hindering the relationship's progress, because "[t]he greater the depth, the more opportunity for a person to feel vulnerable" (West, 2018). Vulnerability translates into trust between partners, leading them to believe the relationship can deepen.
Onion model
SPT uses the onion model, which visualizes self-disclosure as a process of removing layers. The onion denotes various layers of personality. It is sometimes called the "onion theory" of personality. Three major factors influence self-revelation and begin the process of the onion theory: personal characteristics, reward/cost assessments, and the situational context.
Stages
Relationship development is not automatic, but occurs through the skills of partners in revealing or disclosing first their attitudes and later their personalities, inner character, and true selves. This is done in a reciprocal manner. The main factor that acts as a catalyst in the development of relationships is proper self-disclosure. Altman and Taylor propose that there are four major stages in social penetration:
The orientation stage: individuals engage in small talk and simple, harmless clichés like, 'Life's like that'. This first stage follows the standards of social desirability and norms of appropriateness. Outer images are presented and peripheral information is exchanged. The greatest amount of information is exchanged here, but it is the least intimate.
The exploratory affective stage: individuals start to reveal the inner self bit by bit, expressing personal attitudes about moderate topics such as government and education. This may not be the whole truth, as individuals are not yet comfortable enough to lay themselves bare. This is the stage of casual friendship, and many relationships do not go past it.
The affective stage: individuals become more comfortable talking about private and personal matters, and some forms of commitment appear in this stage. Personal idioms, or words and phrases that embody unique meanings between individuals, are used in conversations. Criticism and arguments may arise. A comfortable share of positive and negative reactions occurs in this stage. Relationships become more important, meaningful, and enduring to both parties. It is a stage of close friendships and intimate partners.
The stable stage: the relationship now reaches a plateau in which some of the deepest personal thoughts, beliefs, and values are shared and each partner can predict the emotional reactions of the other. This stage is characterized by complete openness, raw honesty, and a high degree of spontaneity. The least information is exchanged here, but it is the most intimate.
De-penetration stage (optional): when the relationship starts to break down and costs exceed benefits, there is a withdrawal of disclosure that causes the relationship to end.
De-penetration
De-penetration is a gradual process of layer-by-layer withdrawal that causes relationship and intimacy levels to regress and fade away. According to Altman and Taylor, when de-penetration occurs, "interpersonal exchange should proceed backwards from more to less intimate areas, should decrease in breadth or volume, and, as a result, the total cumulative wedge of exchange should shrink". A warm friendship between two people will deteriorate if they begin to close off areas of their lives that had earlier been opened. Relationships are likely to break down not in an explosive argument but in a gradual cooling off of enjoyment and care. Tolstedt and Stokes note that in the de-penetration process, self-disclosure breadth reduces and self-disclosure depth increases. This is because when an intimate relationship is dissolving, a wide range of judgments, feelings and evaluations, particularly the negative ones, are involved in conversations.
Idiomatic communication in self-disclosure
Within the coming together and falling apart stages of a relationship, partners oftentimes use unique forms of communication, such as nicknames and idioms, to refer to one another. This is known as idiomatic communication, a phenomenon that is reported to occur more often among couples in the coming together stages of a relationship. Couples falling apart reported that idiomatic communication, which can include teasing insults and other personally provocative language, have an adverse effect overall on the relationship.
Breadth and depth
Both depth and breadth are related to the onion model. As the wedge penetrates the layers of the onion, the degree of intimacy and the range of areas in an individual's life that an individual chooses to share increases.
The breadth of penetration is the range of areas in an individual's life being disclosed, or the range of topics discussed. For instance, one segment could be family, a specific romantic relationship, or academic studies. Each of these segments or areas are not always accessed at the same time. One could be completely open about a family relationship while hiding an aspect of a romantic relationship for various reasons such as abuse or disapproval from family or friends. It takes genuine intimacy with all segments to be able to access all areas of breadth at all times.
The depth of penetration is a degree of intimacy; as individuals overcome common anxiety over self-disclosure, intimacy builds. Deeper intimacy facilitates relational trust and encourages further conversation about deeper things than would be discussed in everyday conversation. This deepening occurs in many types of relationships: friendship, familial, peer, and romantic.
It is possible to have depth without breadth and vice versa. For instance, depth without breadth could be where only one area of intimacy is accessed. "A relationship that could be depicted from the onion model would be a summer romance. This would be depth without breadth." On the other hand, breadth without depth would be simple everyday conversations. An example would be when passing by an acquaintance and saying, "Hi, how are you?" without ever really expecting to stop and listen to what this person has to say.
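As a toy illustration of how depth and breadth can vary independently, a relationship's disclosure could be modeled as a mapping from topic areas to intimacy levels. Everything in the Python sketch below (the topic names, the 0-10 intimacy scale, the numbers) is invented for illustration and is not part of Altman and Taylor's formal model:

# A toy rendering of the onion model: breadth = how many topic areas are
# disclosed; depth = the intimacy level (0-10) reached within an area.
# All topic names and numbers here are hypothetical.
summer_romance = {"romance": 9}                        # deep, but narrow
acquaintance = {"weather": 1, "work": 1, "sports": 1}  # broad, but shallow

def breadth(disclosure: dict) -> int:
    return len(disclosure)  # number of life areas opened up

def depth(disclosure: dict) -> int:
    return max(disclosure.values(), default=0)  # deepest layer reached

for name, d in [("summer romance", summer_romance),
                ("acquaintance", acquaintance)]:
    print(name, "- breadth:", breadth(d), "depth:", depth(d))

Run as-is, the sketch reports breadth 1 and depth 9 for the summer romance and breadth 3 and depth 1 for the acquaintance, matching the "depth without breadth" and "breadth without depth" cases described above.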
The relationship between breadth and depth has a parallel in modern technology, as Pennington describes in a study of social media use.
Because of social media, the breadth of subjects disclosed can be wide, as can the depth of disclosure by those using the platforms. Users of these platforms seem to feel obligated to share not only the simple information listed by Pennington, but also highly personal information that can now be considered general knowledge. Because of social media platforms and users' willingness to share personal information, the law of reciprocity is replaced by divulging personal information to countless followers and friends without them reciprocating the same level of vulnerability. In cases like this, there is depth without much breadth.
Barriers
Several factors can affect the amount of self-disclosure between partners: gender, race, religion, personality, social status, and ethnic background. For example, American friends tend to discuss intimate topics with each other, whereas Japanese friends are more likely to discuss superficial topics. One might feel less inclined to disclose personal information if doing so would violate their religious beliefs. Being part of a religious minority can also influence how much one feels comfortable in disclosing personal information. In romantic relationships, women are more likely to self-disclose than their male counterparts. Men often refrain from expressing deep emotions out of fear of social stigma. Such barriers can slow the rate of self-disclosure and even prevent relationships from forming. In theory, the more dissimilar two people are, the more difficult or unlikely self-disclosure becomes.
Stranger-on-the-train phenomenon
Most of the time individuals engage in self-disclosure strategically, carefully evaluating what to disclose and what to keep reserved, since disclosing too much in the early stages of a relationship is generally considered inappropriate and can end or damage a relationship. In certain contexts, however, self-disclosure does not follow this pattern. This exception is known as the "stranger-on-the-train" phenomenon, in which individuals rapidly reveal personal information to complete strangers in public spaces. A related concept is verbal leakage, defined by Floyd as "unintentionally telling another person something about yourself". SPT operates under the impression that the self-disclosure given is not only truthful, meaning the speaker believes what is being said to be true, but intentional. Self-disclosure can be defined as "the voluntary sharing of personal history, preferences, attitudes, feelings, values, secrets, etc., with another person". The information given in any relationship, whether an acquaintance or a well-established relationship, should be voluntarily shared; otherwise it does not follow the laws of reciprocity and is considered verbal leakage, or the stranger-on-the-train phenomenon. Some researchers argue that revealing one's inner self to complete strangers serves as a "cathartic exercise" or "service of confession", which allows individuals to unload emotions and express deeper thoughts without being haunted by potential unfavorable comments or judgments. This is because people tend to take lightly and dismiss responses from strangers, who do not really matter in their lives. Other researchers suggest that this phenomenon occurs because individuals feel less vulnerable opening up to strangers who they do not expect to see again.
Sexual communication anxiety among couples
The rate of sexual satisfaction in relationships has been observed to relate directly to effective communication between couples. Individuals in a relationship who experience anxiety find it difficult to divulge information regarding their sexuality and desires because of the perceived vulnerabilities in doing so. In a study published in the Archives of Sexual Behavior, socially anxious individuals generally cited potential judgment or scrutiny as the main instigators of their insecurities in self-disclosing to their romantic partners. This fear of intimacy, and the resulting lower level of sexual self-disclosure within a relationship, is predicted to correlate with a decrease in sexual satisfaction.
Rewards and costs assessment
Social exchange theory
Social exchange theory states that humans weigh each relationship and interaction with another human on a reward–cost scale without realizing it. If the interaction was satisfactory, then that person or relationship is looked upon favorably. When there are positive interactions that produce good reward/cost calculations, the relationship is likely to be more satisfying. If an interaction was unsatisfactory, then the relationship will be evaluated for its costs compared to its rewards or benefits. People try to predict the outcome of an interaction before it takes place. From a scientific standpoint, Altman and Taylor assigned letters as mathematical representations of costs and rewards. They also borrowed concepts from Thibaut and Kelley to describe the relation of costs and rewards in relationships. Thibaut and Kelley's key concepts of relational outcome, relational satisfaction, and relational stability serve as the foundation of Altman and Taylor's rewards minus costs, comparison level, and comparison level of alternatives.
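The reward–cost logic described above reduces to simple arithmetic. The Python sketch below is purely illustrative: the function name and all numeric values are invented, and the threshold comparisons are a simplification of Thibaut and Kelley's comparison-level concepts, not a definitive formalization of SPT.

def evaluate_relationship(rewards: float, costs: float,
                          comparison_level: float,
                          comparison_level_alt: float) -> dict:
    """Toy rendering of rewards minus costs against Thibaut and
    Kelley's comparison levels (all values are hypothetical)."""
    outcome = rewards - costs  # relational outcome
    return {
        "outcome": outcome,
        "satisfied": outcome >= comparison_level,   # vs. expectations
        "stable": outcome >= comparison_level_alt,  # vs. alternatives
    }

# Hypothetical numbers: a rewarding relationship with modest expectations
# and poor alternatives comes out both satisfying and stable.
print(evaluate_relationship(rewards=8, costs=3,
                            comparison_level=4, comparison_level_alt=2))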
Applications
Interpersonal communication
The value of SPT initially lies in the area of interpersonal communication. Scholars have used its concepts and onion model to explore the development of romantic relationships, friendships, parent-child relationships, employer-employee relationships, caregiver-patient relationships, and beyond. Some of the key findings are described as follows.
Researchers have found that in parent-child relationships, information derived from the child's spontaneous disclosure in daily activities was most closely connected to generating and maintaining their trust in parents, indicating the importance of developing shallow but broad relationships with children through everyday conversation rather than through long, profound lectures (Kerr, Stattin & Trost, 1999). Honeycutt used the SPT model and the Attraction Paradigm to analyze happiness between married couples. While the SPT model holds that relationships are grounded in effective communication, the Attraction Paradigm holds that relationships are grounded in shared interests, personality types, and beliefs. The results showed that a perceived understanding of each other can lead to happiness between married couples. While the research notes that it looks only at perceived understanding and not actual understanding, it shows the importance of relationship development. The more that partners in a relationship interact with each other, the more likely they are to understand each other better. Scholars have also used the theory to examine other factors influencing the social penetration process in close friendships. As Mitchell and William (1987) state, ethnicity and sex have an impact on friendships. Their survey results indicate that greater breadth of topics occurs in the penetration process in black friendships than in white friendships. Regarding caregiver–patient relationships, developing a socially penetrated relationship, in which the institution discloses information in both breadth and depth and employs multiple effective penetration strategies, is critical to the benefits of the patients (Yin & Lau, 2005).
Gender-based difference in self-disclosure
Research demonstrates that there are significant gender differences in self-disclosure, particularly emotional self-disclosure, or the expression of personal feelings and emotions, such as, "Sometimes I feel lonely studying abroad, away from my family." Emotional self-disclosure is at the core of intimate relationship development because, unlike factual (descriptive) self-disclosure of superficial self-relevant facts, it is more personal and more effective at cultivating intimacy. Emotional self-disclosure makes individuals "transparent" and vulnerable to others. According to previous studies, females are more socially oriented, whereas males are more task-oriented, and thus females are believed to be more socially interdependent than males. In friendships between females, emotional attachments such as sharing emotions, thoughts, experiences, and support are fundamental, while friendships between males tend to focus on activities and companionship. Overall, women's friendships are described as more intimate than men's friendships.
In addition, there is a gender difference regarding the topics revealed. Men tend to disclose their strengths, while women disclose their fears more. Both men and women are more prone to disclose their emotions to same-sex friends, but women are prone to reveal more than men to both same-sex and cross-sex friends. According to research conducted among Pakistani students, women extensively disclose their feelings, while emotions such as depression, anxiety, and fear are more likely to be disclosed to male friends, because men are perceived as more capable of dealing with such emotions.
Self-disclosure in intercultural relationships
Research reveals that there are multiple obstacles and tensions that occur within intercultural and interracial relationships that do not exist in intracultural and intraracial relationships. These challenges are due to the different norms and ideals a person learns within their racial, ethnic, and national group contexts, meaning an individual will feel more comfortable and understood by those who learned and share the same coordinated meanings.
The first obstacle that may occur is in the initial meeting, since cultural and racial differences can hinder a relationship from forming. If a connection develops, the next obstacle is in self-disclosure. Through self-disclosure, the relationship evolves from the superficial orientation stage to a more intimate, understanding level.
Self-disclosure in the LGBT community
Minority groups have a unique way of creating closeness between each other. For example, lesbian friendships and intimate relationships rely on mutual self-disclosure and honesty. Both parties must expose themselves for an authentic and genuine relationship to develop. The problem is that for many lesbians, this process is not always as simple as it may seem. Exposing one's sexual orientation can be a difficult and grueling process, and because of this, many lesbians avoid disclosing their true identities to new acquaintances, which leads them to turn to their family members or already existing social support systems and can strain or reduce those relationships. Because of these difficulties, lesbians may limit who they choose to surround themselves with. Many involve themselves in groups that consist solely of lesbians or solely of heterosexual women to avoid exposing their lesbian identity. It can be difficult for lesbian individuals to open up about their sexual identities because of the fear of being rejected or losing special relationships.
A study was done to examine self-disclosure among LGBT youths. Through a series of interviews, one group described their coming out experiences. They told the interviewers who they chose to disclose their sexual orientation to and whether the disclosure had a positive or negative effect on their relationships. Results showed that more youths disclosed their sexual orientation to their friends than to their parents. A number of participants chose to disclose their sexual orientation to their teachers. Results also showed both positive and negative reactions. Some youths experienced de-penetration in their friendships after coming out, as well as de-penetration in their sibling relationships. Some participants reported reactions besides positive and negative ones: invalidating reactions, where a participant's sexual orientation was dismissed as a "phase", and neutral reactions, where the recipient of the disclosure informed the participant that they were already aware of their sexual orientation. Some participants reported mixed and evolving results. For example, a participant who identified as a transgender man said that his mother was initially fine with his sexual orientation, which at the time was lesbian, but had a negative reaction when he later came out as transgender. A few participants mentioned that they had initially received negative reactions from friends and family after coming out, but that as time went on, their sexual orientation came to be accepted and the relationships remained intact.
LGBT professionals often feel anxiety about disclosing their sexual orientation to their colleagues. Professionals who chose to disclose have had mixed reactions in how it affected their relationships with colleagues. Some have had positive experiences, strengthening their relationships and their overall job satisfaction, while others feel that disclosing their sexual orientation hurt their professional relationships and their overall job satisfaction. The atmosphere of one's office can influence the decision to disclose. If colleagues are themselves LGBT or have voiced support for the LGBT community, professionals are more likely to disclose their sexual orientation; if few or no colleagues are openly part of the LGBT community and there is no vocal support, they are less likely to come out.
According to a study, LGBT people have different ways of coming out. These varying methods of disclosure include pre-planned, in which someone decides to arrange a conversation; emergent, in which someone decides to come out based on an ongoing conversation; coaxed, in which someone is encouraged to come out by someone else; forced, in which someone is coerced into coming out; romantic, in which someone comes out by making romantic or sexual advances; and educational, in which someone comes out in order to educate or encourage others, usually in front of an audience.
Patient self-disclosure in psychotherapy
Patient self-disclosure has been a prominent issue in therapy, particularly in psychotherapy. Early studies showed that patients' self-disclosure is positively related to treatment outcomes. Freud was a pioneer in encouraging his patients to open up totally in psychotherapy. Many early clinical innovations, such as lying on the couch and the therapist's silence, were aimed at creating an atmosphere that allows patients to disclose their deepest selves and frees them from the concerns that drive conscious suppression of emotions or memories. Even with such efforts, Barry A. Farber says that in psychotherapy, "full disclosure is more of an ideal than an actuality". Patients are prone to reveal certain topics to their therapists, such as disliked characteristics of themselves, social activities, and relationships with friends and significant others, and tend to avoid discussing other issues, such as sexually oriented experiences and immediately experienced negative reactions (e.g. feeling misunderstood or confused), due to conscious inhibition.
In psychotherapy, patients constantly have to deal with the tension between confessional relief and confessional shame. It has been shown that the length of therapy and the strength of the therapeutic alliance (the bond between the patient and the therapist) are two major factors that affect self-disclosure in psychotherapy. As SPT indicates, the longer patients spend with their therapists, the broader the range of issues discussed, and the more topics are marked with depth. The greater the depth of the discussions, the more likely the patient is to feel vulnerable. To strengthen the alliance, it is important to cultivate a comfortable atmosphere for self-disclosure and self-discovery.
Ethical decision making
Ethical and moral decision making has been the topic of contentious academic debate for some time. According to one study, SPT was found to be among the communication theories most applicable to explaining how people make decisions based on their ethical and moral compass. The theory shows a strong correlation between self-disclosure and reinforcement patterns, which have a significant impact on one's perceived ethical code. This can be applied to a number of fields, including communications, psychology, ethics, philosophy, and sociology.
Patient/therapist self-disclosure
The condition of patients with eating disorders has been shown to improve with therapist self-disclosure. In 2017, a study was conducted surveying 120 participants (95% of whom were women). For the purpose of the study, appropriate therapist self-disclosure was defined as sharing positive feelings towards participants in therapy and discussing one's training background.
The results found that 84% of respondents said their therapist disclosed positive feelings to them while in therapy. The study found that when therapists disclosed positive feelings, it had a positive effect on the patient's eating problems; eating disorders generally improved with therapist self-disclosure. When the therapist shared self-referent information with the patient, it created trust, and the patients perceived the therapist as more "human." Patients with eating disorders saw the therapist's disclosure as strengthening the therapeutic relationship. However, personal self-disclosure by the therapist, such as sexuality, personal values, and negative feelings, was considered inappropriate by patients.
Self-disclosure and individuals with social anxiety disorder
Social anxiety disorder (SAD) is a disorder in which individuals experience overwhelming levels of fear in social situations and interactions. They tend to strategically avoid social interactions, which makes it challenging for them to disclose themselves to others and reveal emotions. Self-disclosure is the key to fostering intimate relationships, in which individuals can receive needed social support. Close friendships and romantic relationships are two major sources of social support, which have protective effects and play a crucial role in helping individuals with social phobia cope with distress. Due to the profound impacts of the disorder, late marriage or staying unmarried has been found to be prevalent among individuals with SAD. This is problematic because being unable to gain needed social support from intimate others further confines socially phobic individuals to the loneliness and depression they have been suffering from. In response, Sparrevohn and Rapee suggest that improving communication skills, particularly self-disclosure and emotional expression, should be included in future social phobia treatment, so that the quality of life of individuals with social phobia can be improved.
Server-patron mutual disclosure in restaurant industry
As social penetration theory suggests, disclosure reciprocity induces positive emotion, which is essential for retaining close and enduring relationships. In the service industry, maintaining long-term relationships with existing customers is more cost-effective than securing new customers. Hwang et al. indicate that mutual disclosure is a vital factor for establishing trust between customers and servers. Effective server disclosure, such as sincere advice about menu choices and personal favorite dishes, can elicit reciprocal information exchange between servers and customers. The received information regarding the tastes and preferences of customers can then be used to provide tailored services, which in turn can strengthen customers' trust, commitment, and loyalty toward the restaurant.
Hwang et al. suggest that server disclosure is more effective in evoking customer disclosure among female customers, who are more likely to reveal personal information than their male counterparts. In addition, studies have shown that factors such as expertise (e.g. servers' knowledge and experience), customer-oriented attributes (e.g. listening attentively to customers' concerns), and marital status influence mutual disclosure in the restaurant setting. Expertise is positively correlated with both customer and server disclosure. Server disclosure is only effective in inducing disclosure and positive feelings from unmarried patrons, who feel more comfortable having a conversation with servers.
Organizational communication
The ideas posited by the theory have been researched and re-examined by scholars looking at organizational communication. Some scholars have explored the arena of company policy making, demonstrating the effects company policies have on employees, which range from slight attitudinal responses (such as dissatisfaction) to radical behavioral reactions (such as conflicts, fights, and resignation). Sophisticated implementation of controversial policies is therefore required (Baack, 1991). SPT offers a framework for explaining such potential issues.
Media-mediated communication
Self-disclosure in reality TV
Reality television is a genre characterized by real-life situations and very intimate self-disclosure. Self-disclosure on reality shows can be considered to be self-disclosure by media characters, and the relationship between the audience and the media character is parasocial.
In reality shows, self-disclosure is usually delivered as a monologue, which is similar to real-life self-disclosure and gives the audience the illusion that the messages are directed at them. According to social penetration theory, self-disclosure should follow certain stages, moving gradually from the superficial layers to the central layers. Nonetheless, rapid self-disclosure of intimate layers is the norm in reality TV shows, and unlike in interpersonal interactions, viewers prefer early intimate disclosure, which leads to positive rather than uncomfortable feelings.
Computer-mediated communication
Computer-mediated communication (CMC) is another way in which people can develop relationships. Technology is seen as a medium that connects people, who would otherwise be strangers, through shared interests or cultures. The Internet has been thought to broaden the way people communicate and to help build relationships by providing a medium in which people can be open-minded and unconventional and circumvent traditional limitations of time and place (Yum & Hara, 2005). Before social media and online dating sites, strangers communicated with each other through pen-pal organizations or face-to-face in public locations. With the influx of CMC and the advancement of technology, strangers can decide whether to invest time in developing a relationship based on information provided in a profile. When someone sees that a person has included a similar interest in their profile, uncertainty is reduced, and the two strangers can use CMC to connect over their shared interests.
As time has progressed, the stigma around online dating has been reduced significantly, and more research on SPT and CMC is being done. When engaging in a new relationship through CMC, some elements and nonverbal cues are missing, which increases the uncertainty in the relationship. With the prominent use of online dating services, relationship development has changed. Before CMC influenced relationships, couples relied solely on face-to-face interactions, nonverbal cues, and first impressions to decide whether they would continue to develop the relationship. The introduction of CMC into romantic relationships has added an element for all parties to consider when beginning their relationships.
Some researchers have found that self-disclosure online tends to reassure people that, if they are rejected, it is more likely to be by strangers than by family or friends, which reinforces the desire to self-disclose online rather than face-to-face. People are not only meeting new people to make friends, but also initiating romantic relationships online (Yum & Hara, 2005). In another study, it was found that "CMC dyads compensated for the limitations of the channel by making their questions more intimate than those who exhibited face-to-face" (Sheldon, 2009).
Celebrities' self-disclosure on social media
On social media, the boundaries between interpersonal and mass communication are blurred, and parasocial interaction (PSI) is adopted strategically by celebrities to enhance liking, intimacy, and credibility from their followers. As Ledbetter and Redd note, "During PSI, people interact with a media figure, to some extent, as if they were in an actual interpersonal relationship with the target entity." For celebrities, professional self-disclosure (e.g. information about upcoming events) and personal self-disclosure, such as emotions and feelings, are two primary ways to cultivate illusory intimacy with their followers and expand their fan bases. Unlike real-life interpersonal relationships, disclosure reciprocity is not expected in parasocial interactions, although through imagined interactions on social media, followers feel they are connected to the media figures.
Social networking
Self-disclosure has traditionally been studied in face-to-face interactions, but surveys have also been conducted on how social networking sites such as Facebook, MySpace, Twitter, LinkedIn, hi5, myYearbook, and Friendster affect interactions between human beings. On Facebook, users are able to determine their level and degree of self-disclosure by setting their privacy settings. People achieve breadth by posting about their lives and sharing surface information, and develop intimate relationships with depth by sending private Facebook messages and creating closed groups.
The level of intimacy that one chooses to disclose depends on the type of website being used to communicate. Disclosing personal information online is a goal-oriented process: if one's goal is to build a relationship with someone, they would likely disclose personal information over instant messaging (IM) and on social media. It is highly unlikely that they would choose to share that information on a website used for online shopping. With online shopping, the goal is to make a purchase, so the individual would share only the information needed (i.e. name and address) to complete it. When disclosing information over IM and on social media, the individual is much more selective in what they choose to disclose.
"The hyperpersonal perspective suggests that the limited cues in CMC are likely to result in over attribution and exaggerated or idealized perceptions of others and that those who meet and interact via CMC use such limited cues to engage in optimized or selective self-presentation". (Walther, 1996) There is the possibility that someone could mislead another person because there are more opportunities to build a more desirable identity without fear of persecution. If there is no chance of ever meeting the person on the other end of the computer, then there is a high risk of falsifying information and credentials.
Research has been done to see what kinds of people tend to benefit most from online self-disclosure. The "social compensation" or "poor-get-richer" hypothesis suggests that those who have poor social networks and social anxiety can benefit by disclosing themselves freely and creating new relationships through the Internet (Sheldon, 2009). However, other research has observed that extraverts are more likely to disclose information online. This brings in the "rich-get-richer" hypothesis, which states that "the Internet primarily benefits extraverted individuals... [and] online communication... increases the opportunities for extraverted adolescents to make friends... [the research concluded that] extraverted individuals disclosed more online than introverted" (Sheldon, 2009).
Another study found that while it may be easier for many people to disclose information and dive into their social penetration more quickly online, it also had less favorable outcomes for the closeness individuals may feel when disclosing information online as opposed to in person.
Online dating
Some scholars posit that when initiating a romantic relationship, there are important differences between internet dating sites and other spaces, such as the depth and breadth of the self-disclosed information given before moving on to one-on-one conversation. Studies have shown that in real life, adolescents tend to engage in sexual disclosure according to the level of relationship intimacy, which supports the social penetration model; in cyberspace, men present a stronger willingness and interest in communicating without regard to the current intimacy status or degree. Many counter-examples to the theory also exist in romantic relationship development. Some adolescents discuss the most intimate information when they first meet online, or have sex without knowing each other thoroughly. Contrary to the path stated by SPT, such a relationship develops from the core – the highest depth – to the superficial surface of large breadth. In this way, sexual disclosure on the part of adolescents under certain circumstances departs from the perspective of SPT.
Gibbs, Ellison, and Heino conducted a study analyzing self-disclosure in online dating. They found that the desire for an intimate face-to-face relationship could be a deciding factor in how much information one chose to disclose in online dating. This might mean presenting an honest depiction of one's self online as opposed to a purely positive one. An honest depiction can potentially prevent dates from occurring, especially if the depiction is seen as negative. This could be beneficial, as it would prevent the formation of a relationship that would likely fail. It could also prompt the potential date to self-disclose in response, adding to the possibility of making a connection.
Some individuals might focus more on having a positive depiction, which may cause them to be more selective in the information they disclose. An individual who presents themselves honestly could argue that disclosing their negative information is necessary as in a long-term relationship, one's partner would eventually learn of their flaws. An individual who presents themselves positively could argue it is more appropriate to wait until the relationship develops before sharing negative information.
In a separate study, Ellison, Heino, and Gibbs analyzed specifically how one chooses to present oneself in online dating. They found that most individuals thought of themselves as being honest in their self-presentation and could not understand why someone would present themselves dishonestly. Most people present an ideal self – what they would like to be as opposed to what they actually are. One could justify this by believing that one could become that ideal self in the future. Some users might present themselves in a way that is not necessarily false, but not fully true either. For example, one could say that they enjoy activities such as scuba diving and hiking but go several years without partaking in them. This could come across as misleading to a potential date who partakes in these activities regularly. Weight is a common area in which one might present an ideal self as opposed to an honest self; some users use older pictures or lie about their weight with the intention of losing it. Other individuals present themselves in a way that is inaccurate but reflects how they see themselves. This is known as the "foggy mirror" phenomenon.
Blogging and online chatting
With the advent of the Internet, blogs and online chatrooms have become ubiquitous. Generally, those who blog on a professional level do not disclose personal information; they disclose only information relevant to the company they work for. However, some who blog on a personal level have made a career out of their blogging – many make money by sharing their lives with the world.
According to Jih-Hsin Tang and Cheng-Chung Wang, bloggers tend to have significantly different patterns of self-disclosure for different target audiences. An online survey of 1,027 Taiwanese bloggers examined the depth and breadth of what bloggers disclosed to the online audience, best friends, and parents, as well as nine topics they discussed. Tang and Wang (2012), based on their study of the relationship between social penetration theory and blogging, discovered that "bloggers disclose their thoughts, feelings, and experiences to their best friends in the real world the deepest and widest, rather than to their parents and online audiences. Bloggers seem to express their personal interests and experiences in a wide range of topics online to document their lives or to maintain their online social networks." Dietz-Uhler, Bishop Clark, and Howard studied online chatting and noted that "once a norm of self-disclosure forms, it is reinforced by statements supportive of self-disclosures but not of non-self disclosures".
Cross-cultural social penetration
Studies have rarely considered the role that cultural nuances can play in social penetration, particularly between two cultures that differ in being high context or low context. However, it was found that social penetration theory can be generalized to North American–Japanese dyads, which was further supported when the research was compared with marital communication, the highest level of intimacy. The same held true when analyzing the level of intimacy between different dyads across the spectrum, but it was found that "the results from the analysis of the dispersion scores revealed that mixed dyads had significantly less agreement than low intimacy dyads on the amount of personalized communication and less, but not significantly less, agreement than low intimacy dyads." Therefore, conflict came more from differences in intimacy than from differences in cultural contexts. The study also found that opposite-sex dyads were generally more personalized than same-sex dyads, regardless of culture. Perceived difficulty of communication showed a strong negative correlation with relational development, suggesting that as communicative difficulty is reduced a relationship may grow, and that as difficulty increases growth is hindered.
Criticism
One of the common criticisms of SPT is that it takes a narrow, linear approach to explaining how human beings interact with one another and disclose information. SPT also focuses more on the early stages of human connection and does not take into account the various ways people get close or how multi-layered and varied closer relationships can be. It does not apply as well to coworkers, neighbors, acquaintances, or other forms of fleeting relationships, and it has been criticized for assuming all relationships will follow the same direction. Likewise, the theory is criticized for not being as precise when describing established relationships, such as lifelong friends, family members, or couples that have been married for several decades and would presumably be as intimate as possible. Another concept called into question is reciprocity and when it is most impactful: critics assume that reciprocity is highest in the middle stages of a relationship rather than later on, as SPT suggests.
See also
Personal boundaries
References
Further reading
Interpersonal communication
Cultural competence

Cultural competence, also known as intercultural competence, is a range of cognitive, affective, behavioural, and linguistic skills that lead to effective and appropriate communication with people of other cultures. Intercultural or cross-cultural education are terms used for the training to achieve cultural competence.
Effective intercultural communication comprises behaviors that accomplish the desired goals of the interaction and the parties involved. It includes behaviors that suit cultural expectations, situational characteristics, and the characteristics of the relationship.
Characteristics
Individuals who are effective and appropriate in intercultural situations display high levels of cultural self-awareness and understand the influence of culture on behavior, values, and beliefs. Cognitive processes imply the understanding of situational and environmental aspects of intercultural interactions and the application of intercultural awareness, which is affected by the understanding of the self and own culture. Self-awareness in intercultural interactions requires self-monitoring to censor anything not acceptable to another culture. Cultural sensitivity or cultural awareness leads the individual to an understanding of how their own culture determines feelings, thoughts, and personality.
Affective processes define the emotions that arise during intercultural interactions. These emotions are strongly related to self-concept, open-mindedness, non-judgmentalism, and social relaxation. In general, positive emotions generate respect for other cultures and their differences. Behavioral processes refer to how effectively and appropriately the individual directs actions to achieve goals. Actions during intercultural interactions are influenced by the ability to convey a message clearly, proficiency with the foreign language, flexibility and management of behavior, and social skills.
Creating intercultural competence
Intercultural competence is determined by the presence of cognitive, affective, and behavioral abilities that directly shape communication across cultures. These essential abilities can be separated into five specific skills that are obtained through education and experience:
Mindfulness: the ability to be cognitively aware of how the communication and interaction with others develops. It is important to focus more on the process of the interaction than on its outcome, while keeping the desired communication goals in perspective. For example, it would be better to formulate questions such as "What can I say or do to help this process?" rather than "What do they mean?"
Cognitive flexibility: the ability to create new categories of information rather than keeping old ones. This skill includes being open to new information, taking more than one perspective, and understanding personal ways of interpreting messages and situations.
Tolerance for ambiguity: the ability to maintain focus in situations that are not clear rather than becoming anxious and to methodically determine the best approach as the situation evolves. Generally, low-tolerance individuals look for information that supports their beliefs while high-tolerance individuals look for information that gives an understanding of the situation and others.
Behavioral flexibility: the ability to adapt and accommodate behaviors to a different culture. Although knowing a second language could be important for this skill, it does not necessarily translate into cultural adaptability. The individual must be willing to assimilate the new culture.
Cross-cultural empathy: the ability to visualize with the imagination the situation of another person from an intellectual and emotional point of view. Demonstrating empathy includes the abilities of connecting emotionally with people, showing compassion, thinking in more than one perspective, and listening actively.
Assessment
The assessment of cross-cultural competence is a field that is rife with controversy. One survey identified 86 assessment instruments for cross-cultural competence (3C). A United States Army Research Institute study narrowed the list down to ten quantitative instruments suitable for further exploration of their reliability and validity.
The following characteristics are tested and observed for the assessment of intercultural competence as an existing ability or as the potential to develop it: ambiguity tolerance, openness to contacts, flexibility in behavior, emotional stability, motivation to perform, empathy, metacommunicative competence, and polycentrism. According to Caligiuri, personality traits such as extroversion, agreeableness, conscientiousness, emotional stability, and openness have favorable predictive value for the successful completion of cross-cultural assignments.
Quantitative assessment instruments
Three examples of quantitative assessment instruments are:
the Intercultural Development Inventory
the Cultural Intelligence (CQ) Measurement
the Multicultural Personality Questionnaire
Qualitative assessment instruments
Research in the area of 3C assessment, while thin, points to the value of qualitative assessment instruments in concert with quantitative ones. Qualitative instruments, such as scenario-based assessments, are useful for gaining insight into intercultural competence.
Intercultural coaching frameworks, such as the ICCA (Intercultural Communication and Collaboration Appraisal), do not attempt an assessment; they provide guidance for personal improvement based upon the identification of personal traits, strengths, and weaknesses.
Healthcare
The provision of culturally tailored health care can improve patient outcomes. In 2005, California passed Assembly Bill 1195, which requires patient-related continuing medical education courses in California medical schools to incorporate cultural and linguistic competence training in order to qualify for certification credits. In 2011, HealthPartners Institute for Education and Research implemented the EBAN Experience™ program to reduce health disparities among minority populations, most notably East African immigrants.
Cross-cultural competence
Cross-cultural competence (3C) has generated confusing and contradictory definitions because it has been studied by a wide variety of academic approaches and professional fields. One author identified eleven different terms that have some equivalence to 3C: cultural savvy, astuteness, appreciation, literacy or fluency, adaptability, terrain, expertise, competency, awareness, intelligence, and understanding. The United States Army Research Institute, which is currently engaged in a study of 3C has defined it as "A set of cognitive, behavioral, and affective/motivational components that enable individuals to adapt effectively in intercultural environments".
Organizations in academia, business, health care, government security, and developmental aid agencies have all sought to use 3C in one way or another. Poor results have often been obtained due to a lack of rigorous study of 3C and a reliance on "common sense" approaches.
Cross-cultural competence does not operate in a vacuum, however. One theoretical construct posits that 3C, language proficiency, and regional knowledge are distinct skills that are inextricably linked, but to varying degrees depending on the context in which they are employed. In educational settings, Bloom's affective and cognitive taxonomies serve as an effective framework for describing the overlapping areas among these three disciplines: at the receiving and knowledge levels, 3C can operate with near-independence from language proficiency and regional knowledge. But, as one approaches the internalizing and evaluation levels, the overlapping areas approach totality.
The development of intercultural competence is mostly based on the individual's experiences while communicating with different cultures. When interacting with people from other cultures, the individual encounters obstacles caused by differences in cultural understanding. Such experiences may motivate the individual to acquire skills that help them communicate their point of view to an audience of a different cultural ethnicity and background.
Intercultural competence models
Intercultural Communicative Language Teaching Model. In response to the need to develop EFL learners' ICC in the context of Asia, a theoretical framework based on the instructional systems design (ISD) model ADDIE, with five stages (Analyze – Design – Develop – Implement – Evaluate), is employed as a guideline to construct the ICLT model for EFL learners. The ICLT model is an ongoing process of ICC acquisition. It has three parts: Language-Culture; the main training process (Input – Notice – Practice – Output); and the ICC itself, which are systematically integrated. The second part is the main component, consisting of four teaching steps to facilitate learners' ICC development; each step reflects a stage of knowledge scaffolding and construction.
Immigrants and international students
A salient issue, especially for people living in countries other than their native country, is the issue of which culture they should follow: their native culture or the one in their new surroundings.
International students also face this issue: they have a choice of modifying their cultural boundaries and adapting to the culture around them or holding on to their native culture and surrounding themselves with people from their own country. The students who decide to hold on to their native culture are those who experience the most problems in their university life and who encounter frequent culture shocks. But international students who adapt themselves to the surrounding culture (and who interact more with domestic students) increase their knowledge of the domestic culture, which may help them "blend in" more. As one study notes, "Segmented assimilation theorists argue that students from less affluent and racial and ethnic minority immigrant families face a number of educational hurdles and barriers that often stem from racial, ethnic, and gender biases and discrimination embedded within the U.S. public school system". Such individuals may be said to have adopted bicultural identities.
Ethnocentrism
Another issue that stands out in intercultural communication is the attitude stemming from ethnocentrism. LeVine and Campbell define ethnocentrism as people's tendency to view their culture or in-group as superior to other groups, and to judge those groups by their own standards. With ethnocentric attitudes, those unable to expand their view of different cultures may create conflict between groups. Ignorance of diversity and cultural groups hinders peaceful interaction in a fast-paced globalizing world. The counterpart of ethnocentrism is ethnorelativism: the ability to see the world's many values, beliefs, and norms as cultural rather than universal, and to understand and accept different cultures as equally valid as one's own. It is a mindset that moves beyond in-group–out-group distinctions to see all groups as equally important and valid, and to view individuals in terms of their own cultural context.
Cultural differences
According to Hofstede's cultural dimensions theory, cultural characteristics can be measured along several dimensions. The ability to perceive them and to cope with them is fundamental for intercultural competence. These characteristics, illustrated with a brief sketch after the list below, include:
Individualism versus collectivism
Collectivism
Decisions are based on the benefits of the group rather than the individual;
Strong loyalty to the group as the main social unit;
The group is expected to take care of each individual;
Collectivist cultures include Pakistan, India, and Guatemala.
Individualism
Autonomy of the individual has the highest importance;
Promotes the pursuit of one's own goals and desires, and so values independence and self-reliance;
Decisions prioritize the benefits of the individual rather than the group;
Individualistic cultures are Australia, Belgium, the Netherlands, and the United States.
Masculinity versus femininity
Masculine Cultures
Value behaviors that indicate assertiveness and wealth;
Judge people based on the degree of ambition and achievement;
General behaviors are associated with male behavior;
Sex roles are clearly defined and sexual inequality is acceptable;
Masculine cultures include Austria, Italy, Japan, and Mexico.
Feminine Cultures
Value behaviors that promote the quality of life such as caring for others and nurturing;
Gender roles overlap and sexual equality is preferred as the norm;
Nurturing behaviors are acceptable for both women and men;
Feminine cultures are Chile, Portugal, Sweden, and Thailand.
Uncertainty avoidance
Reflects the extent to which members of a society attempt to cope with anxiety by minimizing uncertainty;
The uncertainty avoidance dimension expresses the degree to which a person in society feels comfortable with a sense of uncertainty and ambiguity.
High uncertainty avoidance cultures
Countries exhibiting high Uncertainty Avoidance Index or UAI maintain rigid codes of belief and behavior and are intolerant of unorthodox behavior and ideas;
Members of society expect consensus about national and societal goals;
Society ensures security by setting extensive rules and keeping more structure;
High uncertainty avoidance cultures are Greece, Guatemala, Portugal, and Uruguay.
Low uncertainty avoidance cultures
Low UAI societies maintain a more relaxed attitude in which practice counts more than principles;
Low uncertainty avoidance cultures accept and feel comfortable in unstructured situations or changeable environments and try to have as few rules as possible;
People in these cultures are more tolerant of change and accept risks;
Low uncertainty avoidance cultures are Denmark, Jamaica, Ireland, and Singapore.
Power distance
Refers to the degree to which cultures accept unequal distributions of power and the extent to which members may challenge the decisions of power holders;
Depending on the culture, some people may be considered superior to others because of a large number of factors such as wealth, age, occupation, gender, personal achievements, and family history.
High power distance cultures
Believe that social and class hierarchy and inequalities are beneficial, that authority should not be challenged, and that people with higher social status have the right to use power;
Cultures with high power distance are Arab countries, Guatemala, Malaysia, and the Philippines.
Low power distance cultures
Believe in reducing inequalities, challenging authority, minimizing hierarchical structures, and using power just when necessary;
Low power distance countries are Austria, Denmark, Israel, and New Zealand.
Short-term versus long-term orientation
Short-term or Monochronic Orientation
Cultures value tradition, personal stability, maintaining "face", and reciprocity during interpersonal interactions
People expect quick results after actions
Historical events and beliefs influence people's actions in the present
Monochronic cultures are Canada, Philippines, Nigeria, Pakistan, and the United States
Long-term or Polychronic Orientation
Cultures value persistence, thriftiness, and humility
People sacrifice immediate gratification for long-term commitments
Cultures believe that past results do not guarantee the future and are aware of change
Polychronic cultures are China, Japan, Brazil, and India
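One way to make these dimensions concrete is to treat a culture's position on each as a numeric score, as Hofstede's published indices do. The Python sketch below uses entirely hypothetical country profiles and a deliberately crude aggregate gap (the mean absolute difference); it is not Hofstede's actual data and not any established cultural-distance index.

# Illustrative Hofstede-style profiles on a 0-100 scale. The dimension
# names follow the list above; the scores are hypothetical.
profiles = {
    "Country A": {"individualism": 90, "masculinity": 60,
                  "uncertainty_avoidance": 45, "power_distance": 40,
                  "long_term_orientation": 25},
    "Country B": {"individualism": 20, "masculinity": 55,
                  "uncertainty_avoidance": 85, "power_distance": 75,
                  "long_term_orientation": 80},
}

def cultural_gap(a: dict, b: dict) -> float:
    """Crude aggregate difference between two profiles: the mean
    absolute per-dimension gap (an illustrative measure only)."""
    return sum(abs(a[d] - b[d]) for d in a) / len(a)

print(cultural_gap(profiles["Country A"], profiles["Country B"]))  # 41.0

On this toy measure, a larger gap would suggest that more of the perceiving and coping described above is needed when members of the two cultures interact.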
Criticisms
Although cultural competence training aims to promote understanding between groups of individuals that, as a whole, think differently, it may fail to recognize differences between individuals within any given group. Such differences can be more significant than the differences between groups, especially in the case of heterogeneous populations and value systems.
Madison has criticized 3C training for its tendency to simplify migration and cross-cultural processes into stages and phases.
See also
Allophilia
Anthropologist
Bennett scale
Cross-cultural communication
Cultural assimilation
Cultural behavior
Cultural diversity
Cultural identity
Cultural intelligence
Cultural pluralism
Cultural safety
Existential migration
Adab (Islamic etiquette)
Faux pas
Interaction
Intercultural communication
Intercultural communication principles
Intercultural relations
Interculturalism
Interpersonal communication
Montreal–Philippines cutlery controversy
Multiculturalism
Proxemics
Purnell Model for Cultural Competence
Social constructionism
Social identity
Transculturation
Worldwide etiquette
Xenocentrism
Footnotes
References
External links
National Center for Cultural Competence at Georgetown University
National Association of School Psychologists
Achieving Cultural Competence guidebook from Administration on Aging, Department of Health and Human Services, United States
University of Michigan Program For Multicultural Health
Cross Cultural Health Care Program
What is the Cost of Intercultural Silence?
Cultural anthropology
Cultural geography
Cultural studies
Etiquette
Human communication
Cultural politics
Cultural competence
Interculturalism
Ethnicity
Ethical leadership

Ethical leadership is leadership that is directed by respect for ethical beliefs and values and for the dignity and rights of others. It is thus related to concepts such as trust, honesty, consideration, charisma, and fairness.
Ethics is concerned with the kinds of values and morals an individual or a society finds desirable or appropriate. Furthermore, ethics is concerned with the virtuousness of individuals and their motives. A leader's choices are also influenced by their moral development.
Theory
Social learning theory
According to social learning theory, ethical leaders act as role models for their followers. Behaviors such as following ethical practices and making ethical decisions are observed and consequently imitated. Rewards and punishments given out by the leader create a second social learning opportunity that teaches which behavior is acceptable and which is not.
Social exchange theory
In social exchange theory, the effect of ethical leadership on followers is explained by transactional exchanges between the leader and their followers. The leader's fairness and care for followers activates a reciprocal process in which the followers act in the same manner towards the leader.
Operationalization
A commonly used measure of ethical leadership is the Ethical Leadership Scale (ELS), developed by Brown et al. in 2005. It consists of 10 items with an internal consistency of alpha = .92 and shows a satisfactory fit, with indices at or above recommended standards.
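Cronbach's alpha, the internal-consistency statistic reported for the ELS above, is computed from the item variances and the variance of the summed scale: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal Python sketch follows; the response data are randomly generated for illustration and have no connection to the actual ELS items.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses to a 10-item scale: a shared
# per-respondent baseline plus item-level noise, so items covary.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(100, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(100, 10)), 1, 5)
print(round(cronbach_alpha(scores), 2))  # high alpha, since items covary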
Other scales include the Ethical Leadership at Work Questionnaire proposed by Kalshoven et al., with 38 items, and the Ethical Leadership Questionnaire (ELQ), composed of 15 items and proposed by Yukl et al. in 2013.
Comparison to other leadership styles
Though conceptually close to and partly overlapping with other leadership styles, such as transformational leadership, spiritual leadership, and authentic leadership, ethical leadership nonetheless describes a unique leadership style with noticeable differences. The most apparent differentiating feature is ethical leadership's focus on setting moral standards and on moral management, which sets it apart from transformational leadership's focus on vision and values and spiritual leadership's focus on hope and faith. Additionally, the nature of ethical leadership lies in awareness of others, not of the self, differentiating it clearly from authentic leadership.
References
Further reading
Reilly, E. C. (2006). The future entering: Reflections on and challenges to ethical leadership. Educational Leadership and Administration, 18, 163–173.
McQueeny, E. (2006). Making ethics come alive. Business Communication Quarterly, 69(2), 158–170.
Wee, H. (2002). Corporate ethics: right makes might. Business Week Online.
Stansbury, J. (2009). Reasoned moral agreement: applying discourse ethics within organizations. Business Ethics Quarterly, 19(1), 33–56.
Seidman, D. (2010). Ethical leadership: an operating manual. Bloomberg Business Week, 10, 1–2.
Gender and development | Gender and development is an interdisciplinary field of research and applied study that implements a feminist approach to understanding and addressing the disparate impact that economic development and globalization have on people based upon their location, gender, class background, and other socio-political identities. A strictly economic approach to development views a country's development in quantitative terms such as job creation, inflation control, and high employment – all of which aim to improve the ‘economic wellbeing’ of a country and the subsequent quality of life for its people. In terms of economic development, quality of life is defined as access to necessary rights and resources including but not limited to quality education, medical facilities, affordable housing, clean environments, and low crime rate. Gender and development considers many of these same factors; however, gender and development emphasizes efforts towards understanding how multifaceted these issues are in the entangled context of culture, government, and globalization. Accounting for this need, gender and development implements ethnographic research, research that studies a specific culture or group of people by physically immersing the researcher into the environment and daily routine of those being studied, in order to comprehensively understand how development policy and practices affect the everyday life of targeted groups or areas.
The history of this field dates back to the 1950s, when studies of economic development first brought women into its discourse, focusing on women only as subjects of welfare policies – notably those centered on food aid and family planning. The focus of women in development increased throughout the decade, and by 1962, the United Nations General Assembly called for the Commission on the Status of Women to collaborate with the Secretary General and a number of other UN sectors to develop a longstanding program dedicated to women's advancement in developing countries. A decade later, feminist economist Ester Boserup’s pioneering book Women’s Role in Economic Development (1970) was published, radically shifting perspectives of development and contributing to the birth of what eventually became the gender and development field.
Since Boserup's work established that development affects men and women differently, the study of gender's relation to development has gathered major interest amongst scholars and international policymakers. The field has undergone major theoretical shifts, beginning with Women in Development (WID), shifting to Women and Development (WAD), and finally becoming the contemporary Gender and Development (GAD). Each of these frameworks emerged as an evolution of its predecessor, aiming to encompass a broader range of topics and social science perspectives. In addition to these frameworks, international financial institutions such as the World Bank and the International Monetary Fund (IMF) have implemented policies, programs, and research regarding gender and development, contributing a neoliberal and smart economics approach to the study. Examples of these policies and programs include Structural Adjustment Programs (SAPs), microfinance, outsourcing, and privatizing public enterprises, all of which direct focus towards economic growth and suggest that advancement towards gender equality will follow. These approaches have been challenged by alternative perspectives such as Marxism and ecofeminism, which respectively reject international capitalism and the gendered exploitation of the environment via science, technology, and capitalist production. Marxist perspectives of development advocate for the redistribution of wealth and power in efforts to reduce global labor exploitation and class inequalities, while ecofeminist perspectives confront industrial practices that accompany development, including deforestation, pollution, environmental degradation, and ecosystem destruction.
Gender Roles in Childhood Development
Introduction
Gender identity formation in early childhood is an important aspect of child development, shaping how individuals see themselves and others in terms of gender (Martin & Ruble, 2010). It encompasses the understanding and internalization of societal norms, roles, and expectations associated with a specific gender. As time progresses, there are more outlets through which these gender roles can be influenced, owing to the growth of new media. This developmental process begins early and is influenced by various factors, including socialization, cultural norms, and individual experiences. Understanding and addressing gender roles in childhood is essential for promoting healthy identity development and fostering gender equity (Martin & Ruble, 2010).
Observations of Gender Identity Formation
Educators have made abundant observations regarding children's expression of gender identity. From an early age, children absorb information about gender from various sources, including family, peers, media, and societal norms (Halim, Ruble, Tamis-LeMonda, & Shrout, 2010). These influences shape their perceptions and behaviors related to gender, leading them to either conform to or challenge gender stereotypes. For example, children may exhibit preferences for certain toys, activities, or clothing based on societal expectations associated with their perceived gender, because those items were handed to them or sanctioned by an authority figure, establishing a baseline.
Teacher Research
Teacher research plays a crucial role in understanding gender roles in childhood development. Educators are often able to see patterns in children's behavior that reflect societal gender norms, such as boys gravitating towards rough play or girls engaging in nurturing activities (Solomon, 2016). These observations prompt further investigation into the factors contributing to these behaviors, including classroom materials, teacher expectations, and social interactions. By examining these factors, educators can gain insights into how gender stereotypes are perpetuated and explore strategies to promote gender equity in the classroom. Because teachers have the educational background to recognize these developments and observe them daily, they are well positioned to conduct research in this area.
Influence of Materials and Teacher Expectations
The materials provided in the classroom and the expectations established by teachers can influence children's behavior and interactions (Solomon, 2016). For instance, offering a diverse range of toys, books, and activities can encourage children to explore interests outside the traditional gender roles promoted by external sources (Martin & Ruble, 2013). Likewise, creating an environment where all children feel valued regardless of gender can help challenge stereotypes and promote healthy socialization experiences. By being aware of the materials and messages conveyed in the classroom, educators can create an environment that fosters gender diversity and empowers children to express themselves authentically (Solomon, 2016).
Children's Desire and Search for Power
Children actively seek and express power in interactions with others, often drawing upon their understanding of gender ideals. For example, they may use knowledge of gender norms to assert authority or control over others, such as excluding peers from a game on the basis of a gender stereotype – for instance, that girls cannot play sports or games involving rough play. These behaviors reflect children's attempts to navigate social hierarchies and establish identities within the context of societal expectations. By recognizing and addressing these dynamics, educators can promote more inclusive and equitable interactions among children.
Early Acquisition of Gender Roles
Children begin to internalize gender roles from a young age, often as early as infancy. By preschool age, many children have developed some understanding of gender stereotypes and expectations (King, 2021). These stereotypes are transmitted through various sources, including family, friends, media outlets, and cultural ideals, shaping children's understanding and behaviors related to gender. Education systems, parental influence, media, and retail marketing all contribute, as many of these influences associate particular colors, influential figures, and toys with a specific gender.
Expressions and Behavior Reflecting Gender Development
Children's expressions provide insights into their changing understanding of gender roles and relationships. However, it is also necessary for children to develop processes of emotional regulation in situations that require adjusting emotional responses of greater intensity (Sanchis et al., 2020). Some children can develop rigid understandings of gender stereotypes, showing bias or discrimination towards those who do not conform to these norms. Educators play a role in counteracting these beliefs by providing opportunities for reflection and promoting empathy and respect for diverse gender identities (Martin & Ruble, 2010).
Educational Strategies
In conclusion, promoting gender equity and challenging traditional gender roles in early childhood requires intentional educational strategies. These include implementing multi-gendered activities, presenting diverse role models, and offering open-ended materials that encourage creativity (Martin & Ruble, 2010). By creating inclusive learning environments that affirm and celebrate gender diversity, researchers and educators can support children in developing healthy and positive identities that transcend narrow stereotypes and promote social justice.
Early approaches
Women in development (WID)
Theoretical approach
The term “women in development” was originally coined by a Washington-based network of female development professionals in the early 1970s who sought to question existing “trickle-down” theories of development by contesting the assumption that economic development had identical impacts on men and women. The Women in Development movement (WID) gained momentum in the 1970s, driven by the resurgence of women's movements in developed countries, and particularly through liberal feminists striving for equal rights and labour opportunities in the United States. Liberal feminism – postulating that women's disadvantages in society may be eliminated by breaking down customary expectations of women, offering better education to women, and introducing equal opportunity programmes – had a notable influence on the formulation of the WID approaches.
The focus of the 1970s feminist movements and their repeated calls for employment opportunities in the development agenda meant that particular attention was given to the productive labour of women, leaving aside reproductive concerns and social welfare. This approach was pushed forward by WID advocates in reaction to the general policy environment maintained by early colonial authorities and post-war development authorities, wherein inadequate reference was made to the work undertaken by women as producers, as women were identified almost solely by their roles as wives and mothers. The WID's opposition to this “welfare approach” was in part motivated by the work of Danish economist Ester Boserup in the early 1970s, who challenged the assumptions of that approach and highlighted the role played by women in agricultural production and the economy.
Reeves and Baden (2000) point out that the WID approach stresses the need for women to play a greater role in the development process. According to this perspective, women's active involvement in policymaking will lead to more successful policies overall. Thus, a dominant strand of thinking within WID sought to link women's issues with development, highlighting how such issues acted as impediments to economic growth; this “relevance” approach stemmed from the experience of WID advocates which illustrated that it was more effective if demands of equity and social justice for women were strategically linked to mainstream development concerns, in an attempt to have WID policy goals taken up by development agencies. The Women in Development approach was the first contemporary movement to specifically integrate women in the broader development agenda and acted as the precursor to later movements such as the Women and Development (WAD), and ultimately, the Gender and Development approach, departing from some of the criticized aspects imputed to the WID.
Criticism
The WID movement faced a number of criticisms. The approach had in some cases the unwanted consequence of depicting women as a unit whose claims are conditional on their productive value, associating increased female status with the value of cash income in women's lives. The WID view, and similar classifications based on Western feminism, applied a general definition to the status, experiences, and contributions of women and to the solutions for women in Third World countries. Furthermore, although the WID advocated for greater gender equality, it did not tackle the unequal gender relations and roles at the basis of women's exclusion and gender subordination, nor did it address the stereotyped expectations held by men. Moreover, the underlying assumption behind the call for the integration of Third World women into their national economies was that women were not already participating in development, which downplayed women's roles in household production and informal economic and political activities. The WID was also criticized for its view that women's status would improve by moving into “productive employment”, implying that the move to the “modern sector” needed to be made from the “traditional” sector to achieve self-advancement, and further implying that the “traditional” work roles often occupied by women in the developing world were inhibiting to self-development.
Women and development (WAD)
Women and development (WAD) is a theoretical and practical approach to development. It was introduced into gender studies scholarship in the second half of the 1970s; its origins can be traced to the First World Conference on Women in Mexico City in 1975, organized by the UN. It is a departure from the previously predominant theory, WID (Women in Development), and is often mistaken for WID, but it has many distinct characteristics.
Theoretical approach
WAD arose out of a shift in thinking about women's role in development, and out of concerns about the explanatory limitations of modernization theory. While previous thinking held that development was a vehicle to advance women, new ideas suggested that development was only made possible by the involvement of women, and that rather than being simply passive recipients of development aid, they should be actively involved in development projects. WAD took this thinking a step further and suggested that women have always been an integral part of development, and did not suddenly appear in the 1970s as a result of exogenous development efforts. The WAD approach suggests women-only development projects, theorized to remove women from the patriarchal hegemony that would persist if women participated in development alongside men in a patriarchal culture, though this concept has been heavily debated by theorists in the field. In this sense, WAD is differentiated from WID by the theoretical framework upon which it was built. Rather than focus specifically on women's relationship to development, WAD focuses on the relationship between patriarchy and capitalism. This theory seeks to understand women's issues from the perspectives of neo-Marxism and dependency theory, though much of the theorizing about WAD remains undocumented due to the persistent and pressing nature of the development work in which many WAD theorists engage.
Practical approach
The WAD paradigm stresses the relationship between women and the work that they perform in their societies as economic agents in both the public and domestic spheres. It also emphasizes the distinctive nature of the roles women play in the maintenance and development of their societies, with the understanding that the mere integration of women into development efforts would serve to reinforce the existing structures of inequality present in societies overrun by patriarchal interests. In general, WAD is thought to offer a more critical conceptualization of women's position than WID.
The WAD approach emphasizes the distinctive nature of women's knowledge, work, goals, and responsibilities, as well as advocating for the recognition of their distinctiveness. This fact, combined with a recognized tendency for development agencies to be dominated by patriarchal interests, is at the root of the women-only initiatives introduced by WAD subscribers.
Criticism
Some of the common critiques of the WAD approach include concerns that women-only development projects would struggle, or ultimately fail, due to their scale and the marginalized status of the women involved. Furthermore, the WAD perspective suffers from a tendency to view women as a class and to pay little attention to the differences among women (such as the feminist concept of intersectionality), including race and ethnicity, and it may prescribe development endeavors that only serve to address the needs of a particular group. While an improvement on WID, WAD fails to fully consider the relationships between patriarchy, modes of production, and the marginalization of women. It also presumes that the position of women around the world will improve when international conditions become more equitable. Additionally, WAD has been criticized for its singular preoccupation with the productive side of women's work, while ignoring the reproductive aspect of women's work and lives. Consequently, WID/WAD intervention strategies have tended to concentrate on the development of income-generating activities without taking into account the time burdens that such strategies place on women. Value is placed on income-generating activities, and none is ascribed to social and cultural reproduction.
Gender and development (GAD)
Theoretical approach
The Gender and Development (GAD) approach focuses on the socially constructed differences between men and women, the need to challenge existing gender roles and relations, and the creation and effects of class differences on development. This approach was majorly influenced by the writings of academic scholars such as Oakley (1972) and Rubin (1975), who argue that the social relations between men and women have systematically subordinated women, along with economists Lourdes Benería and Amartya Sen (1981), who assess the impact of colonialism on development and gender inequality. They state that colonialism imposed more than a 'value system' upon developing nations; it introduced a system of economics 'designed to promote capital accumulation which caused class differentiation'.
GAD departs from WID, which discussed women's subordination and lack of inclusion in discussions of international development without examining broader systems of gender relations. Influenced by this work, by the late 1970s, some practitioners working in the development field questioned focusing on women in isolation. GAD challenged the WID focus on women as an important ‘target group’ and ‘untapped resources’ for development. GAD marked a shift in thinking about the need to understand how women and men are socially constructed and how ‘those constructions are powerfully reinforced by the social activities that both define and are defined by them.’ GAD focuses primarily on the gendered division of labor and gender as a relation of power embedded in institutions. Consequently, two major frameworks, ‘Gender roles’ and ‘social relations analysis’, are used in this approach. 'Gender roles' focuses on the social construction of identities within the household; it also reveals the expectations from ‘maleness and femaleness’ in their relative access to resources. 'Social relations analysis' exposes the social dimensions of hierarchical power relations embedded in social institutions, as well as its determining influence on ‘the relative position of men and women in society.’ This relative positioning tends to discriminate against women.
Unlike WID, the GAD approach is not concerned specifically with women, but with the way in which a society assigns roles, responsibilities and expectations to both women and men. GAD applies gender analysis to uncover the ways in which men and women work together, presenting results in neutral terms of economics and efficiency. In an attempt to create gender equality (denoting women having the same opportunities as men, including the ability to participate in the public sphere), GAD policies aim to redefine traditional gender role expectations. Women are expected to fulfill household management tasks and home-based production, as well as to bear and raise children and care for family members. Children develop these social constructions through observation at a younger age than is commonly assumed: they learn about the differences between male and female actions and objects of use in the specific culture of their environment by observing it (Chung & Huang, 2021). Around three years old, children learn about the stability of gender and demonstrate stereotyping similar to that of adults regarding toys, clothes, activities, games, colors, and even specific personality descriptions (Chung & Huang, 2021). By five years of age, they begin to develop identity and to hold stereotypes about personal–social attributes (Chung & Huang, 2021). At that age, children think that they are more similar to their same-gender peers and are likely to compare themselves with characteristics that fit the gender stereotype. After entering primary school, children's gender stereotyping extends to more dimensions, such as career choices, sports, and motivation to learn particular subjects, which has an impact on the cognition of individuals (Chung & Huang, 2021). The role of a wife is largely interpreted as 'the responsibilities of motherhood.' Men, however, are expected to be breadwinners, associated with paid work and market production. In the labor market, women tend to earn less than men. For instance, 'a study by the Equality and Human Rights Commission found massive pay inequities in some of the United Kingdom's top finance companies: women received around 80 percent less performance-related pay than their male colleagues.' In response to pervasive gender inequalities, the Beijing Platform for Action established gender mainstreaming in 1995 as a strategy across all policy areas and at all levels of governance for achieving gender equality.
GAD has been largely utilized in debates regarding development, but this trend is not seen in the actual practice of development agencies and plans for development. Caroline Moser claims WID persists due to the challenging nature of GAD, but Shirin M. Rai counters this claim, noting that the real issue lies in the tendency to overlap WID and GAD in policy; genuine separation would only be possible if development agencies adopted GAD language exclusively. Caroline Moser developed the Moser Gender Planning Framework for GAD-oriented development planning in the 1980s while working at the Development Planning Unit of the University of London. Working with Caren Levy, she expanded it into a methodology for gender policy and planning.
The Moser framework follows the Gender and Development approach in emphasizing the importance of gender relations.
As with the WID-based Harvard Analytical Framework, it includes a collection of quantitative empirical facts. Going further, it investigates the reasons and processes that lead to conventions of access and control.
The Moser Framework includes gender roles identification, gender needs assessment, disaggregating control of resources and decision-making within the household, planning for balancing work and household responsibilities, distinguishing between different aims in interventions, and involving women and gender-aware organizations in planning.
Criticism
GAD has been criticized for emphasizing the social differences between men and women while neglecting the bonds between them and the potential for changes in roles. Another criticism is that GAD does not dig deeply enough into social relations and so may not explain how these relations can undermine programs directed at women. It also does not uncover the types of trade-offs that women are prepared to make for the sake of achieving their ideals of marriage or motherhood. Another criticism is that the GAD perspective is theoretically distinct from WID, but in practice programs seem to have elements of both. While many development agencies are now committed to a gender approach, in practice the primary institutional perspective remains focused on a WID approach. Specifically, the language of GAD has been incorporated into WID programs. There is a slippage in practice whereby gender mainstreaming is often based on a single normative perspective that treats gender as synonymous with women. Development agencies still advance gender transformation to mean economic betterment for women. A further criticism of GAD is its insufficient attention to culture; an alternative framework has been offered instead: Women, Culture and Development (WCD). This framework, unlike GAD, would not look at women as victims but would rather evaluate the Third World lives of women through the context of the language and practice of gender, the Global South, and culture.
Neoliberal approaches
Gender and neoliberal development institutions
Neoliberalism consists of policies that privatize public industry, deregulate laws or policies that interfere with the free flow of the market, and cut back on social services. These policies were often introduced to many low-income countries through structural adjustment programs (SAPs) by the World Bank and the International Monetary Fund (IMF). Neoliberalism was cemented as the dominant global policy framework in the 1980s and 1990s. Among development institutions, gender issues have increasingly become part of economic development agendas, as the example of the World Bank shows. Awareness by international organizations of the need to address gender issues evolved over recent decades. The World Bank, regional development banks, donor agencies, and government ministries have provided many examples of instrumental arguments for gender equality, for instance by emphasizing the importance of women's education as a way of increasing productivity in the household and the market. Their concerns have often focused on women's contributions to economic growth rather than on the importance of women's education as a means of empowering women and enhancing their capabilities. The World Bank, for example, started focusing on gender in 1977 with the appointment of its first Women in Development Adviser. In 1984 the Bank mandated that its programs consider women's issues. In 1994 the Bank issued a policy paper on Gender and Development, reflecting current thinking on the subject. This policy aims to address policy and institutional constraints that maintain disparities between the genders and thus limit the effectiveness of development programs. Thirty years after the appointment of the first Women in Development Adviser, a so-called Gender Action Plan was launched to underline the importance of the topic within development strategies and to introduce the new Smart Economics strategy.
Gender mainstreaming, mandated by the 1995 Beijing Platform for Action, integrates gender into all aspects of individuals' lives with regard to policy development on gender equality. The World Bank's Gender Action Plan of 2007–10 was built upon the Bank's gender mainstreaming strategy for gender equality. The Gender Action Plan's objective was to advance women's economic empowerment through their participation in land, labor, financial, and product markets. In 2012, the World Development Report examined gender equality and development, the first report in the series to do so. Florika Fink-Hooijer, head of the European Commission's Directorate-General for European Civil Protection and Humanitarian Aid Operations, introduced cash-based aid as well as gender- and age-sensitive aid.
One argument about international financial institutions such as the International Monetary Fund (IMF) and the World Bank is that they support capitalist ideals through their emphasis on countries' economic growth and on participation in the global economy and capitalist systems. Criticisms of neoliberal development institutions also point to the role of banks as institutions, and to the creation of a new workers' economy, as reflections of neoliberal development ideals. Another critique is that markets and institutions contribute to the creation of policies and aid with gendered outcomes. The European Bank for Reconstruction and Development, for instance, has been argued to create a neoliberal dominance that continues the construction and reconstruction of gender norms by treating women as a homogeneous category rather than addressing the gender disparities within its policies.
Gender and outsourcing
One of the features of development encouraged in neoliberal approaches is outsourcing. Outsourcing is when companies from the Western world move some of their business to another country, often because of cheap labor costs. Although outsourcing is a business practice, it is directly related to gender because it has greatly affected women: women are mainly the people hired for these cheap labor jobs, and gendered assumptions shape why they are hired.
One popular destination for relocated factories is China. There, the main factory workers are women, who move from their hometowns to distant cities for factory jobs in order to earn a wage to support not only themselves but their families as well. Oftentimes these women are expected to take such jobs.
Another example of a country to which the garment industry outsources work is Bangladesh, which has one of the lowest labor costs among third world countries (see the ILO data provided in figure 1). Alongside low labor costs comes poor compliance with labor standards in the factories. Factory workers in Bangladesh can experience several types of violations of their rights, including long working hours with no choice but to work overtime, deductions from wages, and dangerous and unsanitary working conditions.
Although discussions of outsourcing rarely address its effects on women, women endure its consequences daily. Women in countries and areas where they may not previously have been able to work and earn their own income now have the opportunity to provide for themselves and their children. Gender comes to the fore because unemployment can be a particular threat to women: without jobs and their own income, women may fall victim to discrimination or abuse. For many women, having their own source of income is highly valuable, and outsourcing gives women in countries where jobs are hard to come by the opportunity to obtain them. Factory owners often note how many women want the jobs they have to offer.
With the availability of jobs and their seeming benefits comes a concern for the working conditions in these outsourced jobs. Although some women have acquired a job, the working conditions may not be safe or ideal. As mentioned above, the jobs are in extreme demand because employment opportunities are so limited in certain regions, which fosters the idea that women workers are disposable. As a result, the workers in these factories have little room to complain and cannot expect safe conditions in their work environments. Women must move far from their hometowns and families to work at these factory jobs; the hours are long, and because they are not home they typically move into dormitories and live at their jobs.
Gender and microfinance
Women have been identified by some development institutions as a key to successful development, for example through financial inclusion. Microcredit is the practice of giving small loans to people in poverty without requiring collateral. It was pioneered by Muhammad Yunus, who formed the Grameen Bank in Bangladesh. Studies have shown that women are more likely than men to repay their debts, and the Grameen Bank accordingly focuses on aiding women. This financial opportunity allows women to start their own businesses for a steady income. Microcredit has focused on women both for their subsequent increased status and because the overall well-being of the home improves more when loans are given to women rather than men.
Numerous case studies in Tanzania have examined the correlation between the role of SACCOS (savings and credit cooperative organizations) and the economic development of the country. The research showed that the microfinance policies were not being carried out in the most efficient ways, due to exploitation. One case study went a step further, claiming that this financial service could provide a more equal society for women in Tanzania.
While there are cases in which women were able to lift themselves out of poverty, there are also cases in which women fell into a poverty trap, unable to repay their loans. It has even been said that microcredit is actually an "anti-developmental" approach. There is little evidence of significant development for these women in the roughly 30 years that microfinance has existed. In South Africa, unemployment is high due to the introduction of microfinance – more so than it was under apartheid. Microcredit intensified poverty in Johannesburg, South Africa, as poor communities, mostly women, who needed to repay debt were forced to work in the informal sector.
Some arguments that microcredit is not effective insist that the structure of the economy, with large informal and agricultural sectors, does not provide a system in which borrowers can be successful. In Nigeria, where the informal economy makes up approximately 45–60% of the economy, women working within it could not attain access to microcredit because of the high demand for loans triggered by high unemployment rates in the formal sector. This study found that Nigerian women are forced into “the hustle” and the heightened risk of the informal economy, which is unpredictable and contributes to women's inability to repay the loans. Another study, conducted in Arampur, Bangladesh, found that microcredit programs within the agrarian community do not effectively help the borrower pay their loan because the terms of the loan are not compatible with farm work. It found that MFIs force borrowers to repay before the harvesting season starts and, in some cases, to endure the struggles of sharecropping work that is funded by the loan.
Although there is debate on how effective microcredit is in alleviating poverty in general, there is an argument that microcredit enables women to participate in society and fulfill their capabilities. For example, a study conducted in Malaysia showed that its version of microcredit, AIM, had a positive effect on Muslim women's empowerment in terms of allowing them more control over family planning and over decisions made in the home.
In contrast, a study conducted on 205 different MFIs concluded that there is still gender discrimination within microfinance institutions themselves and within microcredit, which impacts existing discrimination within communities as well. In Bangladesh, another outcome seen for some Grameen recipients was domestic abuse, as their husbands felt threatened by women bringing in more income. A study in Uganda also noted that men felt threatened by increased female financial dominance, increasing women's vulnerability at home.
Through the “constructivist feminist standpoint,” women can understand that the limitations they face are not inherent but are in fact “constructed” by traditional gender roles, which they have the ability to challenge by owning their own small businesses. With this focus, one study examined the Foundation for International Community Assistance's (FINCA) involvement and impact in Peru, where women become aware of the “machismo” patriarchal culture in which they live through their experiences building small enterprises. In Rajasthan, India, another study found mixed results for women participating in a microlending program. Though many women were not able to pay back their loans, many were still eager to take on debt because their microfinance participation created a platform for addressing other inequities within the community.
Another example is the Women's Development Business (WDB) in South Africa, a Grameen Bank microfinance replicator. According to WDB, the goal is to ensure “[…] that rural women are given the tools to free themselves from the chains of poverty […]” through the allocation of financial resources directly to women, including enterprise development programs. The idea is to use microfinance as a market-oriented tool to ensure access to financial services for disadvantaged and low-income people, thereby fostering economic development through financial inclusion.
In another example regarding microfinance and women, Vossenberg (2013), in Women Entrepreneurship Promotion in Developing Countries: What explains the gender gap in entrepreneurship and how to close it?, describes how, although there has been an increase in entrepreneurship among women, the gender gap still persists. The author states: “The gender gap is commonly defined as the difference between men and women in terms of numbers engaged in entrepreneurial activity, motives to start or run a business, industry choice and business performance and growth” (Vossenberg, 2). The article notes that Eastern Europe has a low rate of women entrepreneurs, whereas in Africa women make up nearly fifty percent of entrepreneurs.
As a reaction, a current topic in the feminist literature on economic development is the ‘gendering’ of microfinance, as women have increasingly become the target borrowers for rural microcredit lending. This, in turn, creates the assumption of a “rational economic woman”, which can exacerbate existing social hierarchies.
Therefore, the critique is that the assumption of economic development through microfinance does not take into account all possible outcomes, especially the ones affecting women.
The impact of programs of the Bretton Woods Institutions and other similar organizations on gender are being monitored by Gender Action, a watchdog group founded in 2002 by Elaine Zuckerman who is a former World Bank economist.
Gender, financial crises, and neoliberal economic policy
The Great Recession and the politics of austerity that followed have opened up a wide range of gender and feminist debates on neoliberalism and the impact of the crisis on women. One view is that the crisis affected women disproportionately and that there is a need for alternative economic structures in which investment in social reproduction is given more weight. The International Labour Organization (ILO) assessed the impact of the Great Recession on workers and concluded that while the crisis initially affected industries dominated by male workers (such as finance, construction and manufacturing), it then spread to sectors in which female workers are predominantly active, such as the service sector and wholesale-retail trade.
There are different views among feminists on whether neoliberal economic policies have more positive or negative impacts on women. In the post-war era, feminist scholars such as Elizabeth Wilson criticized state capitalism and the welfare state as tools to oppress women. Neoliberal economic policies featuring privatization and deregulation – hence a reduction of the influence of the state and more individual freedom – were therefore argued to improve conditions for women. This anti-welfare-state thinking arguably led to feminist support for neoliberal ideas embarking, at the macroeconomic policy level, on deregulation and a reduced role of the state.
Therefore, some scholars in the field argue that feminism, especially during its second wave, has contributed key ideas to neoliberalism that, according to these authors, create new forms of inequality and exploitation.
As a reaction to the phenomenon that some forms of feminism are increasingly interwoven with capitalism, many suggestions on how to name these movements have emerged in the feminist literature. Examples are ‘free market feminism’ or even ‘faux-feminism’.
Smart economics
Theoretical approaches
Advocated chiefly by the World Bank, smart economics is an approach that defines gender equality as an integral part of economic development and aims to spur development through investing more efficiently in women and girls. It stresses that the gap between men and women in human capital, economic opportunities, and voice/agency is a chief obstacle to achieving more efficient development. As an approach, it is a direct descendant of the efficiency approach taken by WID, which “rationalizes ‘investing’ in women and girls for more effective development outcomes.” As articulated in the section on WID, the efficiency approach to women in development was chiefly articulated by Caroline Moser in the late 1980s. Continuing the stream of WID, smart economics' key unit of analysis is women as individuals, and it particularly focuses on measures that help narrow the gender gap. It identifies women as a relatively underinvested source of development and defines gender equality as a higher-return investment opportunity. “Gender equality itself is here depicted as smart economics, in that it enables women to contribute their utmost skills and energies to the project of world economic development.” In these terms, smart economics champions a neoliberal perspective in seeing business as a vital vehicle for change, and it takes the stance of liberal feminism.
The thinking behind smart economics dates back at least to the lost decade of the Structural Adjustment Policies (SAPs) in the 1980s. In 1995, the World Bank issued its flagship publication on gender matters for that year, Enhancing Women's Participation in Economic Development (World Bank 1995). The report laid a critical foundation for the emergence of smart economics: in a chapter entitled ‘The Pay-offs to Investing in Women,’ the Bank proclaimed that investing in women “speeds economic development by raising productivity and promoting the more efficient use of resources; it produces significant social returns, improving child survival and reducing fertility, and it has considerable intergenerational pay-offs.” The Bank also emphasized the associated social benefits generated by investing in women. For example, the Bank drew on research by Whitehead evidencing that greater female control of household income is associated with better outcomes for children's welfare, and by Jeffery and Jeffery, who analyzed the positive correlation between female education and lower fertility rates. In the 2000s, the approach of smart economics came to be further crystallized through various frameworks and initiatives. A first step was the World Bank's Gender Action Plan (GAP) 2007–2010, followed by the “Three Year Road Map for Gender Mainstreaming 2010–13.” The 2010–13 framework responded to criticisms of its precursor and incorporated some shifts in thematic priorities. Last but not least, the decisive turning point came in 2012, marked by the publication of “World Development Report 2012: Gender Equality and Development.” The Bank's first comprehensive focus on gender issues was welcomed by various scholars and practitioners as an indicator of its seriousness. For example, Shahra Razavi appraised the report as ‘a welcome opportunity for widening the intellectual space’.
Other international organizations, particularly UN agencies, have so far endorsed the approach of smart economics. Examining the relationship between child well-being and gender equality, for example, UNICEF referred to the “Double Dividend of Gender Equality.” Its explicit link to the wider framework of the Millennium Development Goals (where Goal 3 is Promoting Gender Equality and Women's Empowerment) claimed a wider legitimacy beyond economic efficiency. In 2007, the Bank proclaimed that “The business case for investing in MDG 3 is strong; it is nothing more than smart economics.” In addition, “Development organisations and governments have been joined in this focus on the ‘business case’ for gender equality and the empowerment of women, by businesses and enterprises which are interested in contributing to social good.” A good example is the “Girl Effect initiative” run by the Nike Foundation. Its claim to an economic imperative and a broader socio-economic impact also met a strategic need of NGOs and community organizations that seek justification for their program funding. Thus, some NGOs, for example Plan International, captured this trend to further their programs. The then-president of the World Bank, Robert B. Zoellick, was quoted by Plan International as stating, “Investing in adolescent girls is precisely the catalyst poor countries need to break intergenerational poverty and to create a better distribution of income. Investing in them is not only fair, it is a smart economic move.” The Great Recession and the austerity measures taken by major donor countries further supported this approach, since international financial institutions and international NGOs came under greater pressure from donors and the global public to design and implement maximally cost-effective programs.
Criticisms
From the mid-2000s, the approach of smart economics and its chief proponent – the World Bank – met a wide range of criticisms and denouncements. These discontents can be broadly categorized into five major claims: subordination of intrinsic value; ignorance of the need for systemic transformation; feminization of responsibility; overemphasis on efficiency; and opportunistic pragmatism. This is not an exhaustive list of criticisms, but it highlights the different emphases among existing criticisms.
The World Bank's gender policy aims to eliminate poverty and enhance economic growth by addressing the gender disparities and inequalities that hinder development. One critique of the World Bank's gender policy is that it is ‘gender-blind’ and does not properly address gender inequity. Rather, the critique holds, the World Bank's gender policy utilizes gender equality as a means to an end rather than analyzing the root causes of economic disparities and gender inequity.
Smart economics' subordination of women under the justification of development invited fierce criticisms. Chant expresses her grave concern that “Smart economics is concerned with building women’s capacities in the interests of development rather than promoting women’s rights for their own sake.” She disagrees that investment in women should be promoted for its instrumental utility: “it is imperative to ask whether the goal of female investment is primarily to promote gender equality and women’s ‘empowerment’, or to facilitate development ‘on the cheap’, and/or to promote further economic liberalization.” Although smart economics outlines that gender equality has intrinsic value (realizing gender equality is an end in itself) and instrumental value (realizing gender equality is a means to more efficient development), many point out that the Bank pays almost exclusive attention to the latter in defining its framework and strategy. Zuckerman echoed this point in describing the “business case [which] ignores the moral imperative of empowering women to achieve women’s human rights and full equal rights with men.” In short, Chant casts doubt on whether it is “possible to promote rights through utilitarianism.”
A wide range of scholars and practitioners has criticized smart economics for endorsing the current status quo of gender inequality and keeping silent on the demand for institutional reform. Its approach “[d]oes not involve public action to transform the laws, policies, and practices which constrain personal and group agency.” Naila Kabeer also posits that “attention to collective action to enable women to challenge structural discrimination has been downplayed.” Simply put, smart economics assumes that women are entirely capable of contributing ever more to economic growth amid ongoing structural barriers to realizing their capabilities.
Sylvia Chant (2008) discredited the approach as a ‘feminisation of responsibility and/or obligation’, in which smart economics intends to spur growth simply by demanding more from women in terms of time, labour, energy, and other resources. She also observes that “Smart economics seeks to use women and girls to fix the world,” clarifying that “It is less welcome to women who are already contributing vast amounts to both production and unpaid reproduction to be romanticised and depicted as the salvation of the world.”
Chant is concerned that “An efficiency-driven focus on young women and girls as smart economics leaves this critical part of the global population out.” Smart economics assumes that all women are at their productive stage, fallaciously neglecting the lives of elderly women and women with disabilities. She thus calls for recognition of the “equal rights of all women and girls – regardless of age, or the extent or nature of their economic contribution.” Moreover, the approach does not address cooperation and collaboration between males and females, leaving men and boys completely out of the picture.
Chant emphasizes that “The smart economics approach represents, at best, pragmatism in a time of economic restructuring and austerity.” Smart economics can enjoy wider acceptance and legitimacy because now is a time when efficiency is most in demand, not because its utilitarianism has universal appeal. She further warns that feminists should be very cautious about "supporting, and working in coalition with, individuals and institutions who approach gender equality through the lens of smart economics. This may have attractions in strategic terms, enabling us to access resources for work focusing on supporting the individual agency of women and girls, but risks aggravating many of the complex problems that gender and development seeks to transform."
Alternative Approaches
Other approaches with different paradigms have also played a historically important role in advancing theories and practices in gender and development.
Marxism and Neo-Marxism
The structuralist debate was first triggered by Marxist and socialist feminists. Marxism, particularly through the alternative models of state socialist development practiced in China and Cuba, challenged the dominant liberal approach over time. Neo-Marxist proponents focused on the role of the post-colonial state in development in general and also on localized class struggles. Marxist feminists advanced these criticisms of liberal approaches and made significant contributions to the contemporary debate.
Dependency theory
Dependency theorists argued that liberal development models, including the attempt to incorporate women into existing global capitalism, were in fact nothing more than the "development of underdevelopment." This view led them to propose that delinking from the structural oppression of global capitalism is the only way to achieve balanced human development.
In the 1980s, there also emerged "a sustained questioning by post-structuralist critics of the development paradigm as a narrative of progress and as an achievable enterprise."
Basic Needs Approach, Capability Approach, and Ecofeminism
Within the liberal paradigm of women and development, various criticisms have emerged. The Basic Needs (BN) approach began to question the focus on growth and income as indicators of development. It was heavily influenced by Sen and Nussbaum's capability approach, which was more gender-sensitive than BN and focused on expanding human freedom. The BN approach notably proposed a participatory approach to development and challenged the dominant discourse of trickle-down effects. These approaches' focus on human freedom led to the development of other important concepts such as human development and human security. From a perspective of sustainable development, ecofeminists articulated the direct link between colonialism and environmental degradation, which resulted in the degradation of women's lives themselves.
References
Sources
Bertrand, Tietcheu (2006). Being Women and Men in Africa Today: Approaching Gender Roles in Changing African Societies.
Bradshaw, Sarah (May 2013). "Women's role in economic development: Overcoming the constraints". UNSDSN. Retrieved 22 November 2013.
Development Assistance Committee (DAC), 1998, p. 7
Eisenstein, Hester (2009). Feminism Seduced: How Global Elites Use Women's Labor and Ideas to Exploit the World. Boulder: Paradigm Publishers. Retrieved 25 November 2013.
Elizabeth Wilson. Women and the Welfare State. Routledge.
Elson, Diane; Pearson, Ruth (27 September 2013). "Keynote of Diane Elson and Ruth Pearson at the Gender, Neoliberalism and Financial Crisis Conference at the University of York". Soundcloud. Retrieved 27 November 2013.
Frank, Andre Gunder (1969). Capitalism and underdevelopment in Latin America: historical studies of Chile and Brazil (Rev. and enl. ed.). New York: Monthly Review Press.
Fraser, Nancy (2012). "Feminism, Capitalism, and the Cunning of History". Working paper. Fondation Maison des sciences de l'homme. p. 14. Retrieved 2 November 2013.
Harcourt, W. (2016). The Palgrave handbook of gender and development: critical engagements in feminist theory and practice.
ILO (1976). Employment, growth, and basic needs: a one-world problem: report of the Director-General of the International Labour Office. Geneva: International Labour Office.
Irene Tinker (1990). Persistent Inequalities: Women and World Development. Oxford University Press. p. 30. .
Jackson, Cecile; Pearson, Ruth, eds. (2002). Feminist visions of development: gender analysis and policy (1st publ. ed.). London: Routledge. Cited chapter: Jeffery, P., & Jeffery, R. (1998). "Silver Bullet or Passing Fancy? Girls' Schooling and Population Policy."
Kabeer, Naila (2003). Gender mainstreaming in poverty eradication and the Millennium development goals a handbook for policy-makers and other stakeholders. London: Commonwealth secretariat. .
McRobbie, Angela (2009). The Aftermath of Feminism: Gender, Culture and Social Change. London: Sage. . Retrieved 25 November 2013.
Merchant, Carolyn (1980). The death of nature: women, ecology, and the scientific revolution: a feminist reappraisal of the scientific revolution (1st ed.). San Francisco: Harper & Row.
Mies, Maria; Bennholdt-Thomsen, Veronika; Werlhof, Claudia von (1988). Women: the last colony (1st publ. ed.). London: Zed Books.
Moser, Caroline (1993). Gender Planning and Development. Theory, Practice and Training. New York: Routledge. p. 3.
Moser, Caroline O.N. (1995). Gender planning and development: theory, practice and training (Reprint ed.). London: Routledge.
Visvanathan, Nalini, et al., eds. The women, gender and development reader (2nd ed.). London: Zed Books. p. 29.
New York Times (10 November 2010). "Nike Harnesses 'Girl Effect' Again". Retrieved 1 December 2013.
Amin, Samir (1976). Unequal development: an essay on the social formations of peripheral capitalism. Translated by Brian Pearce (4th ed.). Hassocks: Harvester Press.
Plan International. 'Because I Am a Girl: The State of the World's Girls 2009. Girls in the Global Economy. Adding it All Up'. Plan International. pp. 11 and 28.
Rankin, Katharine N. (2001). "Governing Development: Neoliberalism, Microcredit, and Rational Economic Woman". Economy and Society (Fondation Maison des sciences de l'homme) 30: 20. Retrieved 2 November 2013.
Rathgeber, Eva M. 1990. “WID, WAD, GAD: Trends in Research and Practice.” The Journal of Developing Areas. 24(4) 289-502
Razavi, S. ‘World Development Report 2012: Gender Equality and Development: An Opportunity Both Welcome and Missed (An Extended Commentary)’. p. 2.
Razavi, Shahrashoub; Miller, Carol (1995)."From WID to GAD: Conceptual shifts in the Women and Development discourse". United Nations Research Institute Occasional Paper series (United Nations Research Institute for Social Development) 1: 2. Retrieved 22 November 2013.
Reeves, Hazel (2000). Gender and Development: Concepts and Definitions. Brighton. p. 8. .
Robert Connell (1987). Gender and power: society, the person, and sexual politics. Stanford University Press. .
Sen, Amartya (2001). Development as freedom(1. publ. as an Oxford Univ. Press paperback ed.). Oxford [u.a.]: Oxford Univ. Press..
Singh, Shweta. (2007). Deconstructing Gender and development for Identities of Women, International Journal of Social Welfare, Issue 16, pages. 100–109.
True, J (2012). Feminist Strategies in Global Governance: Gender Mainstreaming. New York: Routledge. p. 37.
UNICEF (2006). The state of the world's children 2007: women and children: the double dividend of gender equality. United Nations Children's Fund.
UNU (1995). The Quality of Life: A Study Prepared for the World Institute for Development Economics Research (WIDER) of the United Nations University (reprint ed.). Oxford: Clarendon Press.
"World Bank Gender Overview". World Bank. 3 May 2013. Retrieved 5 November 2013.
"WDB about page". Women's Development Business. WDB. 2013. Retrieved 28 November 2013.
World Bank (1995). Enhancing Women's Participation in Economic Development. Washington, DC: World Bank. p. 22.
World Bank. "Applying Gender Action Plan Lessons: A Three-Year Road Map for Gender Mainstreaming (2011–2013)". World Bank Report. World Bank. Retrieved 1 December 2013.
World Bank. "World Development Report 2012: Gender Equality and Development". World Development Report. World Bank. Retrieved 1 December 2013.
World Bank. Global Monitoring Report 2007: Millennium Development Goals: Confronting the Challenges of Gender Equality and Fragile States (Vol. 4). World Bank. p. 145.
Young, Kate; Wolkowitz, Carol; McCullagh, Roslyn (eds.) (1984). Of Marriage and the Market: Women's Subordination Internationally and Its Lessons (2nd ed.). London: Routledge & Kegan Paul. Includes: Whitehead, A. (1984). "'I'm Hungry, Mum': The Politics of Domestic Budgeting".
Further reading
Benería, L., Berik, G., & Floro, M. (2003). Gender, development, and globalization: Economics as if all people mattered. New York: Routledge.
Counts, Alex (2008). Small Loans, Big Dreams: How Nobel Prize Winner Muhammad Yunus and Microfinance Are Changing the World. John Wiley & Sons, Incorporated.
Visvanathan, N., Duggan, L., Nisonoff, L., & Wiegersma, N. (Eds.). (2011). The women, gender, and development reader. 2nd edition. New Africa Books.
Ruble, D. N., Martin, C. L., & Berenbaum, S. A. (1998). Gender development. Handbook of child psychology.
Golombok, S., & Fivush, R. (1994). Gender development. Cambridge University Press.
External links
Gender and Development Resources (WIDNET)
Integrative psychotherapy
Integrative psychotherapy is the integration of elements from different schools of psychotherapy in the treatment of a client. Integrative psychotherapy may also refer to the psychotherapeutic process of integrating the personality: uniting the "affective, cognitive, behavioral, and physiological systems within a person".
Background
Initially, Sigmund Freud developed a talking cure called psychoanalysis; he then wrote about his therapy and popularized it. After Freud, many different disciplines splintered off. Some of the more common therapies include: psychodynamic psychotherapy, transactional analysis, cognitive behavioral therapy, gestalt therapy, body psychotherapy, family systems therapy, person-centered psychotherapy, and existential therapy. Hundreds of different theories of psychotherapy are practiced.
A new therapy is born in several stages. After being trained in an existing school of psychotherapy, the therapist begins to practice. Then, after follow-up training in other schools, the therapist may combine the different theories as the basis of a new practice. Finally, some practitioners write about their new approach and label it with a new name.
A pragmatic or a theoretical approach can be taken when fusing schools of psychotherapy. Pragmatic practitioners blend a few strands of theory from a few schools as well as various techniques; such practitioners are sometimes called eclectic psychotherapists and are primarily concerned with what works. Alternatively, other therapists consider themselves to be more theoretically grounded as they blend their theories; they are called integrative psychotherapists and are not only concerned with what works, but also why it works.
For example, an eclectic therapist might experience a change in their client after administering a particular technique and be satisfied with a positive result. In contrast, an integrative therapist is curious about the "why and how" of the change as well. A theoretical emphasis is important: for example, the client may only have been trying to please the therapist and was adapting to the therapist rather than becoming more fully empowered in themselves.
Different routes to integration
The most recent edition of the Handbook of Psychotherapy Integration (Norcross & Goldfried, 2005) recognized four general routes to integration: common factors, technical eclecticism, theoretical integration, and assimilative integration.
Common factors
The first route to integration is called common factors and "seeks to determine the core ingredients that different therapies share in common". The advantage of a common factors approach is the emphasis on therapeutic actions that have been demonstrated to be effective. The disadvantage is that common factors may overlook specific techniques that have been developed within particular theories. Common factors have been described by Jerome Frank, Bruce Wampold, and Miller, Duncan and Hubble (2005). Common factors theory asserts that it is precisely the factors common to most psychotherapies that make any psychotherapy successful.
Some psychologists have converged on the conclusion that a wide variety of different psychotherapies can be integrated via their common ability to trigger the neurobiological mechanism of memory reconsolidation in such a way as to lead to deconsolidation.
Technical eclecticism
The second route to integration is technical eclecticism which is designed "to improve our ability to select the best treatment for the person and the problem…guided primarily by data on what has worked best for others in the past". The advantage of technical eclecticism is that it encourages the use of diverse strategies without being hindered by theoretical differences. A disadvantage is that there may not be a clear conceptual framework describing how techniques drawn from divergent theories might fit together. The best-known model of technical eclectic psychotherapy is Arnold Lazarus' (2005) multimodal therapy. Another model of technical eclecticism is Larry E. Beutler and colleagues' systematic treatment selection.
Theoretical integration
The third route to integration commonly recognized in the literature is theoretical integration in which "two or more therapies are integrated in the hope that the result will be better than the constituent therapies alone". Some models of theoretical integration focus on combining and synthesizing a small number of theories at a deep level, whereas others describe the relationship between several systems of psychotherapy. One prominent example of theoretical synthesis is Paul Wachtel's model of cyclical psychodynamics that integrates psychodynamic, behavioral, and family systems theories. Another example of synthesis is Anthony Ryle's model of cognitive analytic therapy, integrating ideas from psychoanalytic object relations theory and cognitive psychotherapy. Another model of theoretical integration is specifically called integral psychotherapy (Forman, 2010; Ingersoll & Zeitler, 2010). The most notable model describing the relationship between several different theories is the transtheoretical model.
Assimilative integration
Assimilative integration is the fourth route and acknowledges that most psychotherapists select a theoretical orientation that serves as their foundation but, with experience, incorporate ideas and strategies from other sources into their practice. "This mode of integration favors a firm grounding in any one system of psychotherapy, but with a willingness to incorporate or assimilate, in a considered fashion, perspectives or practices from other schools". Some counselors may prefer the security of one foundational theory as they begin the process of integrative exploration. Formal models of assimilative integration have been described based on a psychodynamic foundation, and based on cognitive behavioral therapy.
Govrin (2015) pointed out a further form of integration, which he called "integration by conversion", whereby theorists import into their own system of psychotherapy a foreign and quite alien concept, and then give that concept a new meaning that allows them to claim it was really an integral part of their original system all along, even if the imported concept significantly changes that system. Govrin gave two examples: Heinz Kohut's novel emphasis on empathy in psychoanalysis in the 1970s, and the novel emphasis on mindfulness and acceptance in "third-wave" cognitive behavioral therapy in the 1990s to 2000s.
Other models that combine routes
In addition to well-established approaches that fit into the five routes mentioned above (the Handbook's four plus integration by conversion), there are newer models that combine aspects of the traditional routes.
Clara E. Hill's (2014) three-stage model of helping skills encourages counselors to emphasize skills from different theories during different stages of helping. Hill's model might be considered a combination of theoretical integration and technical eclecticism. The first stage is the exploration stage. This is based on client-centered therapy. The second stage is entitled insight. Interventions used in this stage are based on psychoanalytic therapy. The last stage, the action stage, is based on behavioral therapy.
Good and Beitman (2006) described an integrative approach highlighting both core components of effective therapy and specific techniques designed to target clients' particular areas of concern. This approach can be described as an integration of common factors and technical eclecticism.
Multitheoretical psychotherapy is an integrative model that combines elements of technical eclecticism and theoretical integration. Therapists are encouraged to make intentional choices about combining theories and intervention strategies.
An approach called integral psychotherapy is grounded in the work of theoretical psychologist and philosopher Ken Wilber (2000), who integrates insights from contemplative and meditative traditions. Integral theory is a meta-theory that recognizes that reality can be organized from four major perspectives: subjective, intersubjective, objective, and interobjective. Various psychotherapies typically ground themselves in one of these four foundational perspectives, often minimizing the others. Integral psychotherapy includes all four. For example, psychotherapeutic integration using this model would include subjective approaches (cognitive, existential), intersubjective approaches (interpersonal, object relations, multicultural), objective approaches (behavioral, pharmacological), and interobjective approaches (systems science). By understanding that all four of these basic perspectives co-occur simultaneously, each can be seen as essential to a comprehensive view of the life of the client. Integral theory also includes a stage model that suggests that various psychotherapies seek to address issues arising from different stages of psychological development.
The generic term, integrative psychotherapy, can be used to describe any multi-modal approach which combines therapies. For example, an effective form of treatment for some clients is psychodynamic psychotherapy combined with hypnotherapy. Kraft & Kraft (2007) gave a detailed account of this treatment of a 54-year-old female client with refractory irritable bowel syndrome (IBS) in the setting of a phobic anxiety state. The client made a full recovery, and this was maintained at the follow-up a year later.
Comparison with eclecticism
In Integrative and Eclectic Counselling and Psychotherapy, the authors make clear the distinction between integrative and eclectic psychotherapy approaches: "Integration suggests that the elements are part of one combined approach to theory and practice, as opposed to eclecticism which draws ad hoc from several approaches in the approach to a particular case." Eclectic practitioners are not bound by the theories, dogma, conventions or methodology of any one particular school. Instead, they use what they believe, feel, or experience tells them will work best, either in general or for the often immediate needs of individual clients, while working within their own preferences and capabilities as practitioners.
See also
Integrative body psychotherapy
Journal of Psychotherapy Integration
Notes
References
Beutler, L. E., Consoli, A. J. & Lane, G. (2005). Systematic treatment selection and prescriptive psychotherapy: an integrative eclectic approach. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 121–143). New York: Oxford.
Brooks-Harris, J. E. (2008). Integrative Multitheoretical Psychotherapy. Boston: Houghton-Mifflin.
Castonguay, L. G., Newman, M. G., Borkovec, T. D., Holtforth, M. G. & Maramba, G. G. (2005). Cognitive-behavioral assimilative integration. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 241–260). New York: Oxford.
Ecker, B., Ticic, R., Hulley, L. (2012). Unlocking the Emotional Brain: Eliminating Symptoms at Their Roots Using Memory Reconsolidation. New York: Routledge.
Forman, M. D. (2010). A Guide to Integral Psychotherapy: Complexity, Integration, and Spirituality in Practice. Albany, NY: SUNY Press.
Frank, J. D. & Frank, J. B. (1991). Persuasion and Healing: A Comparative Study of Psychotherapy (3rd ed.). Baltimore, MD: Johns Hopkins University.
Frank, K. A. (1999). Psychoanalytic Participation: Action, Interaction, and Integration. Mahwah, NJ: Analytic Press.
Good, G. E. & Beitman, B. D. (2006). Counseling and Psychotherapy Essentials: Integrating Theories, Skills, and Practices. New York: W. W. Norton.
Govrin, A. (2015). Blurring the threat of 'otherness': integration by conversion in psychoanalysis and CBT. Journal of Psychotherapy Integration, 26(1): 78–90.
Hill, C. E. (2014). Helping Skills: Facilitating Exploration, Insight, and Action (4th ed.). Washington, DC: American Psychological Association.
Ingersoll, E. & Zeitler, D. (2010). Integral Psychotherapy: Inside Out/Outside In. Albany, NY: SUNY Press.
Kraft T. & Kraft D. (2007). Irritable bowel syndrome: symptomatic treatment approaches versus integrative psychotherapy. Contemporary Hypnosis, 24(4): 161–177.
Lane, R. D., Ryan, L., Nadel, L., Greenberg, L. S. (2015). Memory reconsolidation, emotional arousal and the process of change in psychotherapy: new insights from brain science. Behavioral and Brain Sciences, 38: e1.
Lazarus, A. A. (2005). Multimodal therapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 105–120). New York: Oxford.
Messer, S. B. (1992). A critical examination of belief structures in integrative and eclectic psychotherapy. In J. C. Norcross, & M. R. Goldfried, (Eds.), Handbook of Psychotherapy Integration (pp. 130–165). New York: Basic Books.
Miller, S. D., Duncan, B. L., & Hubble, M. A. (2005). Outcome-informed clinical work. In J. C. Norcross, & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 84–102). New York: Oxford.
Norcross, J. C. (2005). A primer on psychotherapy integration. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 3–23). New York: Oxford.
Norcross, J. C. & Goldfried, M. R. (Eds.) (2005). Handbook of Psychotherapy Integration (2nd ed.). New York: Oxford.
Prochaska, J. O. & DiClemente, C. C. (2005). The transtheoretical approach. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 147–171). New York: Oxford.
Ryle, A. (2005). Cognitive analytic therapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 196–217). New York: Oxford.
Stricker, G. & Gold, J. (2005). Assimilative psychodynamic psychotherapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 221–240). New York: Oxford.
Wachtel, P. L., Kruk, J. C., & McKinney, M. K. (2005). Cyclical psychodynamics and integrative relational psychotherapy. In J. C. Norcross & M. R. Goldfried (Eds.), Handbook of Psychotherapy Integration (2nd ed., pp. 172–195). New York: Oxford.
Wampold, B. E. & Imel Z. E. (2015). The Great Psychotherapy Debate: The Evidence for What Makes Psychotherapy Work (2nd ed.). New York: Routledge.
Welling, H. (June 2012). Transformative emotional sequence: towards a common principle of change. Journal of Psychotherapy Integration, 22(2): 109–136.
Wilber, K. (2000). Integral Psychology: Consciousness, Spirit, Psychology, Therapy. Boston: Shambhala.
Woolfe, R. & Palmer, S. (2000). Integrative and Eclectic Counselling and Psychotherapy. London; Thousand Oaks, CA: Sage Publications.
Žvelc, G. & Žvelc, M. (2021). Integrative psychotherapy: A mindfulness- and compassion-oriented approach. Routledge.
Further reading
Fromme, D. K. (2011). Systems of Psychotherapy: Dialectical Tensions and Integration. New York: Springer.
Magnavita, J. J. & Anchin, J. C. (2014). Unifying Psychotherapy: Principles, Methods, and Evidence from Clinical Science. New York: Springer.
Prochaska, J. O. & Norcross, J. C. (2018). Systems of Psychotherapy: A Transtheoretical Analysis (9th ed.). New York: Oxford.
Scaturo, D. J. (2005). Clinical Dilemmas in Psychotherapy: a Transtheoretical Approach to Psychotherapy Integration. Washington, DC: American Psychological Association.
Schneider, K. J. (Ed.) (2008). Existential-Integrative Psychotherapy: Guideposts to the Core of Practice. New York: Routledge.
Schneider, K. J. & Krug, O.T. (2010). Existential-Humanistic Therapy. Washington, DC: American Psychological Association.
Stricker, G. & Gold, J. R. (2006). A Casebook of Psychotherapy Integration. Washington, DC: American Psychological Association.
Urban, W. J. (1978) Integrative Therapy: Foundations of Holistic and Self Healing. Los Angeles: Guild of Tutors Press.
External links
The Problem of Psychotherapy Integration by Tullio Carere
The Rise of Integrative Psychotherapy by John Söderlund
Society for the Exploration of Psychotherapy Integration
International Integrative Psychotherapy Association
Institute for Integrative Psychotherapy and Counselling, Ljubljana
International Journal of Integrative Psychotherapy
Anthropic principle
The anthropic principle, also known as the observation selection effect, is the hypothesis that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question as to why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life.
There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail.
Definition and basis
The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility.
The term anthropic in "anthropic principle" has been argued to be a misnomer. Although the principle singles out the currently observable, carbon-based kind of life, none of the finely tuned phenomena actually require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved.
The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. The anthropic principle is often criticized for lacking falsifiability, and its critics may point out that it is a non-scientific concept, even though the weak anthropic principle, "conditions that are observed in the universe must allow the observer to exist", is "easy" to support in mathematics and philosophy (i.e., it is a tautology or truism). However, building a substantive argument on a tautological foundation is problematic. Stronger variants of the anthropic principle are not tautologies and thus make claims considered controversial by some and that are contingent upon empirical verification.
Anthropic observations
In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis. Small rocky planets did not yet exist. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end. Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that had inspired Dirac's varying-G theory.
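The size of these "large number" coincidences can be checked with a few lines of arithmetic. The Python sketch below is a toy illustration, not anything from Dicke's paper: the constants are rounded, and the choice of atomic timescale (here the light-crossing time of the classical electron radius) is only one of several conventional options.

    # Toy arithmetic for the Dirac/Dicke "large number" coincidence.
    # All constants in SI units, rounded to four significant figures.
    e = 1.602e-19         # elementary charge, C
    k_e = 8.988e9         # Coulomb constant, N m^2 C^-2
    G = 6.674e-11         # gravitational constant, N m^2 kg^-2
    m_p = 1.673e-27       # proton mass, kg
    m_e = 9.109e-31       # electron mass, kg
    c = 2.998e8           # speed of light, m s^-1
    r_e = 2.818e-15       # classical electron radius, m
    t_universe = 4.35e17  # approximate age of the universe, s (~13.8 Gyr)

    # Dimensionless ratio 1: electric vs. gravitational attraction between
    # a proton and an electron (the separation cancels out of the ratio).
    force_ratio = (k_e * e**2) / (G * m_p * m_e)

    # Dimensionless ratio 2: the age of the universe in "atomic" time units.
    age_ratio = t_universe / (r_e / c)

    print(f"electric/gravitational force ratio ~ {force_ratio:.2e}")  # ~2.3e39
    print(f"age of universe / atomic time      ~ {age_ratio:.2e}")    # ~4.6e40

Both ratios come out near 10^40; it is this coincidence that Dirac tried to explain by letting G vary, and that Dicke instead explained anthropically.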
Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life.
The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
Origin
The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang).
Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics.
Roger Penrose explained the weak form as follows:
One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made? Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?"
Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section.
Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem."
Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang, Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be.
Variants
Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space.
Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows:
Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike Carter they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP.
Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history." This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler:
"There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'."
This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves.
"Observers are necessary to bring the Universe into being."
Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP) which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.
"An ensemble of other different universes is necessary for the existence of our Universe."
By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes:
Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice.
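The sensitivity of such reasoning to the reference class can be shown with a toy Bayesian calculation in the style of the Doomsday argument (the numbers below are invented for illustration and are not Bostrom's own example). An observer treats their "birth rank" as randomly sampled from all observer-moments in the chosen reference class, and updates between a small class and a large one.

    # Toy self-sampling calculation illustrating reference-class sensitivity.
    rank = 100  # your assumed position among all observer-moments

    # Two hypotheses about the total size of the reference class,
    # given equal prior probability.
    hypotheses = {"few": 200, "many": 200_000}
    prior = {h: 0.5 for h in hypotheses}

    def likelihood(rank, n_total):
        # Under uniform self-sampling, P(rank | N) = 1/N for rank <= N.
        return 1.0 / n_total if rank <= n_total else 0.0

    unnormalized = {h: prior[h] * likelihood(rank, n) for h, n in hypotheses.items()}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}
    print(posterior)  # {'few': ~0.999, 'many': ~0.001}

The posterior is dominated by the smaller class, and the result flips or dissolves as the reference class is redrawn, which is exactly the ambiguity noted above.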
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1. It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary.
Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe":
Character of anthropic reasoning
Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder.
Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions.
The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle."
The modern form of a design argument is put forth by intelligent design. Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys argue that the anthropic principle as conventionally stated actually undermines intelligent design.
Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate:
1. The absurd universe: Our universe just happens to be the way it is.
2. The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded.
3. The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows them to exist.
4. Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence.
5. The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind.
6. The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP).
7. The fake universe: Humans live inside a virtual reality simulation.
Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005).
Clearly each of these hypotheses resolves some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas options 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994).
The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links.
Observational evidence
No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist.
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following:
Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants;
Various theories for generating multiple universes will prove robust;
Evidence that the universe is fine tuned will continue to accumulate;
No life with a non-carbon chemistry will be discovered;
Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe.
Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life.
Probabilistic predictions of parameter values can be made given:
a particular multiverse with a "measure", i.e. a well-defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX0 that X lies in the range X0 < X < X0 + dX0), and
an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe).
The probability of observing value X is then proportional to N(X) P(X). A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned": if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense.
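The weighting scheme can be made concrete with a toy numerical model (the prior and observer-count functions below are invented for illustration, not drawn from any published calculation). A flat prior over a parameter X is multiplied by an observer count that vanishes outside a narrow life-permitting window; the resulting expected value is small but not zero, i.e. not "over-tuned".

    import numpy as np

    # Toy anthropic weighting: the posterior density over a parameter X is
    # proportional to N(X) * P(X), with P the prior density of universes
    # and N(X) the number of observers per universe.
    x = np.linspace(-10.0, 10.0, 20001)  # parameter values, arbitrary units
    dx = x[1] - x[0]

    prior = np.ones_like(x)              # flat prior over the landscape
    n_obs = np.exp(-x**2)                # observers only in a window of width ~1

    posterior = prior * n_obs
    posterior /= posterior.sum() * dx    # normalize to a probability density

    expected_magnitude = (np.abs(x) * posterior).sum() * dx
    print(f"expected |X| under anthropic weighting: {expected_magnitude:.3f}")
    # ~0.56: of order the width of the life-permitting window, i.e. small
    # compared with the allowed range but generically nonzero.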
One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle, unless there was some reason to think that that position was a necessary condition for our existence as observers.
Applications of the principle
The nucleosynthesis of carbon-12
Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction.
However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance.
Cosmic inflation
Don Page criticized the entire theory of cosmic inflation as follows. He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin, must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require.
String theory
String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed.
Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Lubos Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present.
Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe.
Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. Their work suggests that, without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life.
Dimensions of spacetime
There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That N = 3 and T = 1, setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue.
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204).
In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there is an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are 5 + 2k spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse.
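Ehrenfest's orbital instability can be illustrated numerically. In N spatial dimensions the analogue of Newtonian gravity gives a central force falling off as 1/r^(N−1); the Python sketch below (a toy integration with convenient units and made-up initial conditions, not a rigorous stability proof) perturbs a circular orbit slightly and shows that it remains bounded for N = 3 but runs away for N = 4.

    import numpy as np

    def simulate_orbit(n_dim, v_factor=1.01, dt=1e-3, steps=400_000):
        """Integrate a planar orbit under a central force |F| = 1/r**(n_dim - 1),
        the analogue of Newtonian gravity in n_dim spatial dimensions.
        Start on a circular orbit at r = 1 (where the circular speed is 1 for
        any n_dim), perturbed by multiplying the speed by v_factor."""
        pos = np.array([1.0, 0.0])
        vel = np.array([0.0, v_factor])
        r_min = r_max = 1.0
        for _ in range(steps):
            r = np.linalg.norm(pos)
            acc = -pos / r**n_dim            # magnitude 1/r**(n_dim-1), inward
            vel += acc * dt                  # semi-implicit (symplectic) Euler
            pos += vel * dt
            r = np.linalg.norm(pos)
            r_min, r_max = min(r_min, r), max(r_max, r)
            if r > 50.0:                     # treat as escaped
                break
        return r_min, r_max

    for n in (3, 4):
        r_min, r_max = simulate_orbit(n)
        print(f"N = {n}: radius ranged over [{r_min:.2f}, {r_max:.2f}]")
    # Typical output: for N = 3 the orbit stays a bounded ellipse (radius
    # between about 1.00 and 1.04); for N = 4 the same small perturbation
    # sends the radius past the escape cutoff.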
Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if N > 3, Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if N < 3, gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when N = 2, nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except N = 3 and T = 1, which describes the world around us.
On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that (3 + 1)-dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~10^21 solar masses, due to the small positivity of the cosmological constant observed.
In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks.
Metaphysical interpretations
Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ-centered (compare Omega Point), expressing a creatio evolutiva instead of the elder notion of creatio continua. From a strictly secular, humanist perspective, it also allows human beings to be placed back at the center, an anthropogenic shift in cosmology. Karl W. Giberson has laconically stated that
William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point.
The anthropic cosmological principle
A thorough extant study of the anthropic principle is the book The Anthropic Cosmological Principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way.
The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks.
Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out.
Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas.
In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP):
Reception and controversies
Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects.
A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts."
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another.
Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result.
Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa.
Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc.
Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe.
The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours.
Notes
Footnotes
References
Stenger, Victor J. (1999). "Anthropic design". The Skeptical Inquirer 23 (August 31, 1999): 40–43.
Mosterín, Jesús (2005). "Anthropic explanations in cosmology". In P. Hájek, L. Valdés and D. Westerståhl (eds.), Logic, Methodology and Philosophy of Science: Proceedings of the 12th International Congress of the LMPS. London: King's College Publications, pp. 441–473.
External links
Nick Bostrom: web site devoted to the anthropic principle.
Friederich, Simon. Fine-tuning, review article of the discussion about fine-tuning, highlighting the role of the anthropic principles.
Gijsbers, Victor. (2000). Theistic anthropic principle refuted – Positive atheism magazine.
Chown, Marcus, Anything Goes, New scientist, 6 June 1998. On Max Tegmark's work.
Stephen Hawking, Steven Weinberg, Alexander Vilenkin, David Gross and Lawrence Krauss: Debate on anthropic reasoning Kavli-CERCA conference video archive.
Sober, Elliott R. 2009, "Absence of evidence and evidence of absence – Evidential transitivity in connection with fossils, fishing, fine-tuning, and firing squads." Philosophical Studies, 2009, 143: 63–90.
"Anthropic coincidence" – The anthropic controversy as a segue to Lee Smolin's theory of cosmological natural selection.
Leonard Susskind and Lee Smolin debate the anthropic principle.
Debate among scientists on arxiv.org.
Evolutionary probability and fine tuning
Benevolent design and the anthropic principle at MathPages
Critical review of "The privileged planet"
The anthropic principle – a review.
Berger, Daniel, 2002, "An impertinent résumé of the Anthropic cosmological principle." A critique of Barrow & Tipler.
Jürgen Schmidhuber: Papers on algorithmic theories of everything and the anthropic principle's lack of predictive power.
Paul Davies: Cosmic jackpot – Interview about the anthropic principle (starts at 40 min), 15 May 2007.
Astronomical hypotheses
Concepts in epistemology
Physical cosmology
Principles
Religion and science
Emile, or On Education
Emile, or On Education is a treatise on the nature of education and on the nature of man written by Jean-Jacques Rousseau, who considered it to be the "best and most important" of all his writings. Due to a section of the book entitled "Profession of Faith of the Savoyard Vicar", Emile was banned in Paris and Geneva and was publicly burned in 1762, the year of its first publication. It was forbidden by the Church and placed on the Index Librorum Prohibitorum. During the French Revolution, Emile served as the inspiration for what became a new national system of education.
Politics and philosophy
The work tackles fundamental political and philosophical questions about the relationship between the individual and society: how, in particular, the individual might retain what Rousseau saw as innate human goodness but remain part of a corrupting collectivity. It has a famous opening sentence: "Everything is good as it leaves the hands of the Author of things; everything degenerates in the hands of man".
Rousseau seeks to describe a system of education that would enable the natural man he identifies in The Social Contract (1762) to survive corrupt society. He employs the novelistic device of Emile and his tutor to illustrate how such an ideal citizen might be educated. Emile is scarcely a detailed parenting guide but contains some specific advice on raising children. It is regarded by some as the first philosophy of education in Western culture to have a serious claim to completeness and as one of the first Bildungsroman novels.
Book divisions
The text is divided into five books: the first three are dedicated to the child Emile, the fourth to an exploration of the adolescent, and the fifth to outlining the education of his female counterpart Sophie, as well as to Emile's domestic and civic life.
Book I
In Book I, Rousseau discusses not only his fundamental philosophy but also begins to outline how one would have to raise a child to conform with that philosophy. He begins with the early physical and emotional development of the infant and the child.
Emile attempts to "find a way of resolving the contradictions between the natural man who is 'all for himself' and the implications of life in society". The famous opening line does not bode well for the educational project—"Everything is good as it leaves the hands of the Author of things; everything degenerates in the hands of man". But Rousseau acknowledges that every society "must choose between making a man or a citizen" and that the best "social institutions are those that best know how to denature man, to take his absolute existence from him in order to give him a relative one and transport the I into the common unity". To "denature man" for Rousseau is to suppress some of the "natural" instincts that he extols in The Social Contract, published the same year as Emile, but while it might seem that for Rousseau such a process would be entirely negative, this is not so. Emile does not lament the loss of the noble savage. Instead, it is an effort to explain how natural man can live within society.
Many of Rousseau's suggestions in this book are restatements of the ideas of other educational reformers. For example, he endorses Locke's program of "harden[ing children's] bodies against the intemperance of season, climates, elements; against hunger, thirst, fatigue". He also emphasizes the perils of swaddling and the benefits of mothers nursing their own infants. Rousseau's enthusiasm for breastfeeding led him to argue: "[B]ut let mothers deign to nurse their children, morals will reform themselves, nature's sentiments will be awakened in every heart, the state will be repeopled"—a hyperbole that demonstrates Rousseau's commitment to grandiose rhetoric. As Peter Jimack, the noted Rousseau scholar, argues: "Rousseau consciously sought to find the striking, lapidary phrase which would compel the attention of his readers and move their hearts, even when it meant, as it often did, an exaggeration of his thought". And, in fact, Rousseau's pronouncements, although not original, effected a revolution in swaddling and breastfeeding.
Book II
The second book concerns the initial interactions of the child with the world. Rousseau believed that at this phase the education of children should be derived less from books and more from the child's interactions with the world, with an emphasis on developing the senses, and the ability to draw inferences from them. Rousseau concludes the chapter with an example of a boy who has been successfully educated through this phase. The father takes the boy out flying kites, and asks the child to infer the position of the kite by looking only at the shadow. This is a task that the child has never specifically been taught, but through inference and understanding of the physical world, the child is able to succeed in his task. In some ways, this approach is the precursor of the Montessori method.
Book III
The third book concerns the selection of a trade. Rousseau believed it necessary that the child must be taught a manual skill appropriate to his sex and age, and suitable to his inclinations, by worthy role models.
Book IV
Once Emile is physically strong and learns to carefully observe the world around him, he is ready for the last part of his education—sentiment: "We have made an active and thinking being. It remains for us, in order to complete the man, only to make a loving and feeling being—that is to say, to perfect reason by sentiment". Emile is a teenager at this point and it is only now that Rousseau believes he is capable of understanding complex human emotions, particularly sympathy. Rousseau argues that, while a child cannot put himself in the place of others, once he reaches adolescence and becomes able to do so, Emile can finally be brought into the world and socialized.
In addition to introducing a newly passionate Emile to society during his adolescent years, the tutor also introduces him to religion. According to Rousseau, children cannot understand abstract concepts such as the soul before the age of about fifteen or sixteen, so to introduce religion to them is dangerous. He writes: "It is a lesser evil to be unaware of the divinity than to offend it". Moreover, because children are incapable of understanding the difficult concepts that are part of religion, he points out that children will only recite what is told to them—they are unable to believe.
Book IV also contains the famous "Profession of Faith of the Savoyard Vicar", the section that was largely responsible for the condemnation of Emile and the one most frequently excerpted and published independently of its parent tome. Rousseau writes at the end of the "Profession": "I have transcribed this writing not as a rule for the sentiments that one ought to follow in religious matters, but as an example of the way one can reason with one's pupil in order not to diverge from the method I have tried to establish". Rousseau, through the priest, leads his readers through an argument which concludes only to belief in "natural religion": "If he must have another religion", Rousseau writes (that is, beyond a basic "natural religion"), "I no longer have the right to be his guide in that".
Book V
In Book V, Rousseau turns to the education of Sophie, Emile's wife-to-be.
Rousseau begins his description of Sophie, the ideal woman, by describing the inherent differences between men and women in a famous passage:
In what they have in common, they are equal. Where they differ, they are not comparable. A perfect woman and a perfect man ought not to resemble each other in mind any more than in looks, and perfection is not susceptible of more or less. In the union of the sexes each contributes equally to the common aim, but not in the same way. From this diversity arises the first assignable difference in the moral relations of the two sexes.
For Rousseau, "everything man and woman have in common belongs to the species, and ... everything which distinguishes them belongs to the sex". Rousseau states that women should be "passive and weak", "put up little resistance" and are "made specially to please man"; he adds, however, that "man ought to please her in turn", and he explains the dominance of man as a function of "the sole fact of his strength", that is, as a strictly "natural" law, prior to the introduction of "the law of love".
Rousseau's stance on female education, much like the other ideas explored in Emile, "crystallize[s] existing feelings" of the time. During the eighteenth century, women's education was traditionally focused on domestic skills—including sewing, housekeeping, and cooking—as they were encouraged to stay within their suitable spheres, which Rousseau advocates.
Rousseau's brief description of female education sparked an immense contemporary response, perhaps even more so than Emile itself. Mary Wollstonecraft, for example, dedicated a substantial portion of her chapter "Animadversions on Some of the Writers who have Rendered Women Objects of Pity, Bordering on Contempt" in A Vindication of the Rights of Woman (1792) to attacking Rousseau and his arguments.
When responding to Rousseau's argument in A Vindication of the Rights of Woman, Wollstonecraft directly quotes Emile in Chapter IV of her piece:
"Educate women like men," says Rousseau [in Emile], "and the more they resemble our sex the less power will they have over us." This is the very point I aim at. I do not wish them to have power over men; but over themselves.
French writer Louise d'Épinay's Conversations d'Emilie made her disagreement with Rousseau's take on female education clear as well. She believed that a woman's role in society is shaped by her education, not by natural differences, as Rousseau argues.
Rousseau also touches on the political upbringing of Emile in Book V by including a concise version of his Social Contract in the book. His political treatise The Social Contract was published in the same year as Emile and was likewise soon banned by the government for its controversial theories on the general will. The version of this work in Emile, however, does not go into detail concerning the tension between the Sovereign and the Executive, but instead refers the reader to the original work.
Émile et Sophie
In the incomplete sequel to Emile, Émile et Sophie (English: Emilius and Sophia), published after Rousseau's death, Sophie is unfaithful (in what is hinted may have been a drugged rape), and Emile, initially furious with her betrayal, remarks "the adulteries of the women of the world are not more than gallantries; but Sophia an adulteress is the most odious of all monsters; the distance between what she was, and what she is, is immense. No! there is no disgrace, no crime equal to hers". He later relents somewhat, blaming himself for taking her to a city full of temptation, but he still abandons her and their children. Throughout the agonized internal monologue, represented through letters to his old tutor, he repeatedly comments on all of the affective ties that he has formed in his domestic life—"the chains [his heart] forged for itself". As he begins to recover from the shock, the reader is led to believe that these "chains" are not worth the price of possible pain—"By renouncing my attachments to a single spot, I extended them to the whole earth, and, while I ceased to be a citizen, became truly a man". While in La Nouvelle Héloïse the ideal is domestic, rural happiness (if not bliss), in Emile and its sequel the ideal is "emotional self-sufficiency which was the natural state of primitive, pre-social man, but which for modern man can be attained only by the suppression of his natural inclinations". According to Dr. Wilson Paiva, member of the Rousseau Association, "[L]eft unfinished, Émile et Sophie reminds us of Rousseau's incomparable talent for producing a brilliant conjugation of literature and philosophy, as well as a productive approach of sentiment and reason through education".
Reviews
Rousseau's contemporary and philosophical rival Voltaire was critical of Emile as a whole, but admired the section of the book that had led to its being banned, the "Profession of Faith of the Savoyard Vicar". Voltaire endorsed the Profession of Faith section and called it "fifty good pages... it is regrettable that they should have been written by... such a knave".
The German scholar Goethe wrote in 1787 that "Emile and its sentiments had a universal influence on the cultivated mind".
See also
Original Stories from Real Life, a response text written by Mary Wollstonecraft
Robinson Crusoe
Notes
References
Paiva, Wilson A. "Discussing human connectivity in Rousseau as a pedagogical issue." Educ. Pesqui., São Paulo, v. 45, e191470, 2019. http://educa.fcc.org.br/pdf/ep/v45/en_1517-9702-ep-45-e191470.pdf
Bibliography
Bloch, Jean. Rousseauism and Education in Eighteenth-century France. Oxford: Voltaire Foundation, 1995.
Boyd, William. The Educational Theory of Jean Jacques Rousseau. New York: Russell & Russell, 1963.
Jimack, Peter. Rousseau: Émile. London: Grant and Cutler, Ltd., 1983.
Rousseau, Jean-Jacques. Emile, or On Education. Trans. Allan Bloom. New York: Basic Books, 1979.
Rousseau, Jean-Jacques. Emilius and Sophia; or, The Solitaries. London: Printed by H. Baldwin, 1783.
Trouille, Mary Seidman. Sexual Politics in the Enlightenment: Women Writers Read Rousseau. Albany, NY: State University of New York Press, 1997.
External links
The Emile of Jean-Jacques Rousseau at Columbia.edu – complete French text and English translation by Grace G. Roosevelt (an adaptation and revision of the Foxley translation)
in an English translation by Barbara Foxley
Rousseau's Émile; or, Treatise on education (abridged English translation by William Harold Wayne; 1892) at Archive.org
1762 novels
Works by Jean-Jacques Rousseau
French bildungsromans
Education novels
French philosophical novels
History of education
Philosophy of education
Treatises
Censored books
Novels about education
Pedagogical publications
Goldilocks principle
The Goldilocks principle is named by analogy to the children's story "Goldilocks and the Three Bears", in which a young girl named Goldilocks tastes three different bowls of porridge and finds she prefers porridge that is neither too hot nor too cold but has just the right temperature. The concept of "just the right amount" is easily understood and applied to a wide range of disciplines, including developmental psychology, biology, astronomy, economics and engineering.
Applications
In cognitive science and developmental psychology, the Goldilocks effect or principle refers to an infant's preference to attend to events that are neither too simple nor too complex according to their current representation of the world. This effect was observed in infants, who are less likely to look away from a visual sequence when the current event is moderately probable, as measured by an idealized learning model.
In astrobiology, the Goldilocks zone refers to the habitable zone around a star. As Stephen Hawking put it, "Like Goldilocks, the development of intelligent life requires that planetary temperatures be 'just right'." The Rare Earth hypothesis uses the Goldilocks principle in the argument that a planet must be neither too far away from nor too close to a star and galactic centre to support life, while either extreme would result in a planet incapable of supporting life. Such a planet is colloquially called a "Goldilocks Planet". Paul Davies has argued for the extension of the principle to cover the selection of our universe from a (postulated) multiverse: "Observers arise only in those universes where, like Goldilocks' porridge, things are by accident 'just right'."
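The quantitative core of the habitable-zone idea follows from the inverse-square law: the stellar flux a planet receives scales as L/d², so the band of "just right" distances scales as the square root of the star's luminosity. The Python sketch below is a minimal illustration; the flux limits used for the inner and outer edges are rough, commonly quoted values chosen here for illustration, not authoritative boundaries.

```python
import math

def habitable_zone_au(luminosity_solar, inner_flux=1.1, outer_flux=0.53):
    """Rough habitable-zone edges in AU for a star of given luminosity.

    A planet at distance d (AU) receives a flux of luminosity_solar / d**2
    in Earth units, so each edge sits at sqrt(L / S) for a flux limit S.
    The default limits (1.1 and 0.53 Earth fluxes) are illustrative values.
    """
    inner = math.sqrt(luminosity_solar / inner_flux)
    outer = math.sqrt(luminosity_solar / outer_flux)
    return inner, outer

# A Sun-like star: edges near 0.95 and 1.37 AU, bracketing Earth's orbit.
print(habitable_zone_au(1.0))
# A dim red dwarf at 4% solar luminosity: the zone moves far closer in.
print(habitable_zone_au(0.04))
```

The same scaling explains why planets in the habitable zones of faint stars orbit so closely that they face other hazards, such as tidal locking.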
In medicine, it can refer to a drug that can hold both antagonist (inhibitory) and agonist (excitatory) properties. For example, the antipsychotic Aripiprazole causes not only antagonism of dopamine D2 receptors in areas such as the mesolimbic area of the brain (which shows increased dopamine activity in psychosis) but also agonism of dopamine receptors in areas of dopamine hypoactivity, such as the mesocortical area.
In economics, a Goldilocks economy sustains moderate economic growth and low inflation, which allows a market-friendly monetary policy. A Goldilocks market occurs when the price of commodities sits between a bear market and a bull market. Goldilocks pricing, also known as good–better–best pricing, is a marketing strategy that uses product differentiation to offer three versions of a product to corner different parts of the market: a high-end version, a middle version, and a low-end version.
In communication, the Goldilocks principle describes the amount, type, and detail of communication necessary in a system to maximise effectiveness while minimising redundancy and excessive scope on the "too much" side and avoiding incomplete or inaccurate communication on the "too little" side.
In statistics, the "Goldilocks fit" refers to a regression model with just enough flexibility to balance the error caused by bias (underfitting) against the error caused by variance (overfitting).
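A small numerical experiment makes the trade-off visible. Everything below is an illustrative construction (the sine target, noise level, and polynomial degrees are assumptions, not from the source): a degree-1 fit is typically too rigid (high bias), a very high degree typically chases the noise (high variance), and an intermediate degree tends to score best on held-out data.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_samples(n):
    """Noisy observations of a smooth target function."""
    x = np.sort(rng.uniform(-1, 1, n))
    return x, np.sin(3 * x) + rng.normal(0, 0.2, n)

x_train, y_train = noisy_samples(30)
x_test, y_test = noisy_samples(200)

for degree in (1, 5, 15):  # too rigid, moderate, too flexible
    coeffs = np.polyfit(x_train, y_train, degree)        # least-squares fit
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: held-out MSE = {test_mse:.3f}")
```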
In the design sprint, the "Goldilocks quality" refers to creating a prototype with just enough quality to evoke honest reactions from customers.
In machine learning, the Goldilocks learning rate is the learning rate that results in an algorithm taking the fewest steps to achieve minimal loss. Algorithms with a learning rate that is too large often fail to converge at all, while those with too small a learning rate take too long to converge.
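The effect is easy to reproduce. The sketch below runs bare-bones gradient descent on the one-dimensional loss f(x) = x² (a toy problem chosen for illustration, not a real training setup): the oversized rate diverges, the tiny rate barely moves, and the moderate rate converges in a handful of steps.

```python
def gradient_descent(lr, steps=1000, tol=1e-6):
    """Minimize f(x) = x**2, whose gradient is 2*x, starting from x = 1.0."""
    x = 1.0
    for step in range(1, steps + 1):
        x -= lr * 2 * x                      # one gradient step
        if abs(x) < tol:                     # near the minimum at x = 0
            return f"converged in {step} steps"
    return f"did not converge in {steps} steps (|x| = {abs(x):.2e})"

for lr in (1.1, 1e-4, 0.3):                  # too large, too small, about right
    print(f"learning rate {lr}: {gradient_descent(lr)}")
```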
See also
Cosmic Jackpot
Frugality
Anthropic principle
Big History
Fine-tuned universe
Golden mean (philosophy)
Anna Karenina principle
References
Astronomical hypotheses
Articles containing video clips
Goldilocks and the Three Bears
Restorative practices
Restorative practices (or RP) is a social science field concerned with improving and repairing relationships and social connections among people. Whereas a zero-tolerance approach to social mediation prioritizes punishment, RP privileges the repair of harm and dialogue among actors. The purpose of RP is to build healthy communities, increase social capital, decrease crime and antisocial behavior, mend harm and restore relationships. It ties together research in a variety of social science fields, including education, psychology, social work, criminology, sociology, organizational development and leadership. RP has been growing in popularity since the early 2000s, and varying approaches exist.
Overview
The social science of restorative practices offers a common thread to tie together theory, research and practice in diverse fields such as education, counseling, criminal justice, social work and organizational management. Individuals and organizations in many fields are developing models and methodology and performing empirical research that share the same implicit premise, but are often unaware of the commonality of each other's efforts.
In education, restorative practices, such as circles and groups, provide opportunities for students to share their feelings, build relationships and solve problems, and when there is wrongdoing, to play an active role in addressing the wrong and making things right. Schools that implement restorative practices (RP) have been found to provide safe school environments through building quality relationships and a supportive community. Further, urban educators who carry out RP have observed a decrease in disciplinary issues and absenteeism, a heightened sense of community, as well as an increase in school safety and instructional time.
For example, in criminal justice, restorative circles and restorative conferences allow victims, offenders and their respective family members and friends to come together to explore how everyone has been affected by an offense and, when possible, to decide how to repair the harm and meet their own needs. In England's Criminal Justice System (CJS), prisons use RP to stimulate positive social interactions and decrease tension when situational challenges arise. Introduced in the 1990s in some of Europe's CJS, RP has improved relationships between the prisons' residents and their relatives through restorative family interventions.
In social work, family group decision-making (FGDM) or family group conferencing (FGC) processes empower extended families to meet privately, without professionals in the room, to make a plan to protect children in their own families from further violence and neglect or to avoid residential placement outside their own homes.
These various fields employ different terms, all of which fall under the rubric of restorative practices: In the criminal justice field the phrase used is "restorative justice"; in social work the term employed is "empowerment"; in education, talk is of "positive discipline" or "the responsive classroom"; and in organizational leadership "horizontal management" is referenced. The social science of restorative practices recognizes all of these perspectives and incorporates them into its scope.
Functions
The use of restorative practices has the potential to:
reduce crime, violence and bullying
improve human behavior
strengthen civil society
provide effective leadership
restore relationships
repair harm
History
Restorative practices has its roots in restorative justice, a way of looking at criminal justice that emphasizes repairing the harm done to people and relationships rather than only punishing offenders.
In the modern context, restorative justice originated in the 1970s as mediation or reconciliation between victims and offenders. In Elmira, Ontario, Canada, near Kitchener, in 1974 Mark Yantzi, a probation officer, arranged for two teenagers to meet directly with their victims following a vandalism spree and agree to restitution. The positive response by the victims led to the world's first victim-offender reconciliation program, in Kitchener, with the support of the Mennonite Central Committee and collaboration with the local probation department. The concept subsequently acquired various names, such as victim-offender mediation and victim-offender dialogue as it spread through North America and to Europe through the 1980s and 1990s.
Restorative justice echoes ancient and indigenous practices employed in cultures all over the world, from Native American and First Nations to African, Asian, Celtic, Hebrew, Arab and many others.
Eventually modern restorative justice broadened to include communities of care as well, with victims' and offenders' families and friends participating in collaborative processes called conferences and circles. Conferencing addresses power imbalances between the victim and offender by including additional supporters. In the 2010s, federal and local governments in the US, as well as community organizations, requested schools decrease suspension rates. To provide an alternative to disciplinary measures like suspension, large urban school districts, like New York City Public Schools and the Los Angeles Unified School District, started implementing RP.
A major aspect of any restorative practice is neutrality. Though restorative practices aim to resolve issues within a group, the facilitation of the resolution is supposed to remain impartial. It is, therefore, important that facilitators of any restorative practice are neutral to the situation at issue. Some researchers also classify the study of restorative practice through the concepts of process and values. In this framework, process refers to the specific actions taken to repair harms and/or build community. Values refer to the overarching principles that guide those actions and that distinguish restorative practice from more traditional, punitive approaches to justice.
Terminology
Family group conference
The family group conference (FGC) started in New Zealand in 1989 as a response to native Māori people's concerns with the number of their children being removed from their homes by the courts. It was originally envisioned as a family empowerment process, not as restorative justice. In North America it was renamed family group decision making (FGDM).
Restorative conferences
In 1991 the FGC was adapted by an Australian police officer, Terry O'Connell, as a community policing strategy to divert young people from court, into a restorative process often called a restorative conference. It has been called other names, such as a community accountability conference and victim-offender conference. In 1994 Marg Thorsborne, an Australian educator, was the first to use a restorative conference in a school.
Circles
A "circle" is a versatile restorative practice that can be used proactively, to develop relationships and build community or reactively, to respond to wrongdoing, conflicts and problems. Circles give people an opportunity to speak and listen to one another in an atmosphere of safety, decorum and equality. The circle process allows people to tell their stories and offer their own perspectives.
The circle has a wide variety of purposes: conflict resolution, healing, support, decision making, information exchange and relationship development. Circles offer an alternative to contemporary meeting processes that often rely on hierarchy, win-lose positioning and argument.
Circles can be used in any organizational, institutional or community setting. Circle time and morning meetings have been widely used in primary and elementary schools for many years and more recently in secondary schools and higher education. In industry, the quality circle has been employed for decades to engage workers in achieving high manufacturing standards. In 1992 Yukon Circuit Court Judge Barry Stewart pioneered the sentencing circle, which involved community members in helping to decide how to deal with an offender. In 1994 Mennonite Pastor Harry Nigh befriended a mentally challenged repeat sex offender by forming a support group with some of his parishioners, called a circle of support and accountability, which was effective in preventing re-offending.
Circles can be both proactive and reactive. Proactive circles aim to create a positive classroom or environmental climate as facilitators solicit the expression of opinions and ideas in a safe environment. Reactive circles, often called restorative circles, work in conjunction with proactive circles. When a specific behavior or incident impacts individuals in the class or group, restorative circles aim to restore the climate and culture of the group through conflict resolution. Sometimes specific restorative conferences may take place, which are direct and individual conferences between specific parties to discuss and resolve troubling behaviors and emotions.
Difference between restorative justice and restorative practices
The notion of restorative practices evolved in part from the concept and practices of restorative justice. But from the emergent point of view of restorative practices, restorative justice can be viewed as largely reactive, consisting of formal or informal responses to crime and other wrongdoing after it occurs. Restorative practices also includes the use of informal and formal processes that precede wrongdoing, those that proactively build relationships and a sense of community to prevent conflict and wrongdoing.
Other terminology
The term restorative practices, along with terms like restorative approaches, restorative justice practices and restorative solutions, are increasingly used to describe practices related to or derived from restorative conferences and circles. These practices also include more informal practices (see Restorative Practices Continuum).
Use of restorative practices is now spreading worldwide, in education, criminal justice, social work, counseling, youth services, workplace, college residence hall and faith community applications. Notably, restorative practices can and do serve as reactionary tools in these settings but have also been successful when implemented as proactive pedagogy.
Restorative practices continuum
Restorative practices are not limited to formal processes, such as restorative conferences or family group conferences, but range from informal to formal. On a restorative practices continuum, the informal practices include affective statements that communicate people's feelings, as well as affective questions that cause people to reflect on how their behavior has affected others. Impromptu restorative conferences, groups and circles are somewhat more structured but do not require the elaborate preparation needed for formal conferences. Moving from left to right on the continuum, as restorative practices become more formal, they involve more people, require more planning and time, and are more structured and complete. Although a formal restorative process might have dramatic impact, informal practices have a cumulative impact because they are part of everyday life.
The aim of restorative practices is to develop community and to manage conflict and tensions by repairing harm and building relationships. This statement identifies both proactive (building relationships and developing community) and reactive (repairing harm and restoring relationships) approaches. Organizations and services that only use the reactive without building the social capital beforehand are less successful than those that also employ the proactive.
Social discipline window
The social discipline window is a concept with broad application in many settings. It describes four basic approaches to maintaining social norms and behavioral boundaries. The four are represented as different combinations of high or low control and high or low support. The restorative domain combines both high control and high support and is characterized by doing things with people (collaboratively), rather than to them (coercively) or for them (without their involvement).
The social discipline window also defines restorative practices as a leadership model for parents in families, teachers in classrooms, administrators and managers in organizations, police and social workers in communities and judges and officials in government. The fundamental unifying hypothesis of restorative practices is that "people are happier, more cooperative and productive, and more likely to make positive changes when those in positions of authority do things with them, rather than to them or for them." This hypothesis maintains that the punitive and authoritarian "to" mode and the permissive and paternalistic "for" mode are not as effective as the restorative, participatory, engaging "with" mode.
The social discipline window reflects the seminal thinking of renowned Australian criminologist John Braithwaite, who has asserted that reliance on punishment as a social regulator is problematic because it shames and stigmatizes wrongdoers, pushes them into a negative societal subculture and fails to change their behavior. The restorative approach, on the other hand, reintegrates wrongdoers back into their community and reduces the likelihood that they will reoffend.
Implementations of restorative practices
Educational system
There has been an accumulation of RP experiences in schools. Research on these seems to validate that RP has led to a decrease in disciplinary measures and a slight narrowing of racial gaps in exclusionary discipline. One goal of RP has been to close the racial disciplinary gap, since students of color, especially African American children, are suspended more frequently than white students. According to a 2018 US Office of Civil Rights study of the 2015-16 school year, Black boys made up approximately one twelfth (8%) of enrolled students but one fourth (25%) of suspended students.
In a 2020 survey of fifth and eighth graders, students described RP's restorative circles (RC) as a valuable method of expression and of sharing perspectives about problems. Students use RP as a way to express their thoughts and feelings and to encourage intercommunication. Schools have used classroom conferencing to address disruption that has affected learning. In such situations, RP has helped teachers and students discuss behavioral expectations of one another. In New Zealand, schools have experienced the best restorative outcomes when all parties actively participate and understand how the problem originated, what should be done, and how the parties can reach a shared commitment that the issue not repeat itself.
Prison system
RP has served to address concerns of legitimacy, fairness, and accountability. Restorative conversations and circles, and family interventions, have played a positive role in building relationships between residents, officers, and families. In one of England's prisons, residents and officers made use of a restorative circle to resolve a kitchen issue. Since the residents had left the kitchen untidy on repeated occasions, the officers punitively closed the kitchen for a couple of days. However, the closing of the kitchen created bitterness among the residents, one of whom proposed carrying out a restorative circle to establish a kitchen code of conduct. Initially hesitant to participate, the officers eventually helped mediate the residents' agreement; the officers' presence provided a sense of security to the prisoners.
Criticisms
There have been criticisms of RP from different perspectives. RP interventions among elementary-aged school children seem to be more impactful than those among early teens or teenaged children, so the effectiveness of interventions across grade levels must be examined. Additionally, RP expectations may be unrealistic: out of numerous RP components, schools may implement only RP circles yet still expect a shift in school climate. In prison systems, RP is viewed by some officers as a soft option that runs counter to prison values.
References
External links
International Institute for Restorative Practices
Restorative Practices International
Community building
Social economy
Socioeconomics
Cultural variation
Cultural variation refers to the rich diversity in social practices that different cultures exhibit around the world. Cuisine and art change from one culture to the next, but so do gender roles, economic systems, and social hierarchy, among any number of other humanly organised behaviours. Cultural variation can be studied across cultures (for example, a cross-cultural study of ritual in Indonesia and Brazil) or across generations (for example, a comparison of Generation X and Generation Y) and is often a subject studied by anthropologists, sociologists and cultural theorists with subspecialties in the fields of economic anthropology, ethnomusicology, health sociology, etc. In recent years, cultural variation has become a rich source of study in neuroanthropology, cultural neuroscience, and social neuroscience.
See also
Cultural diversity
Cultural anthropology
Cultural studies
Culture theory
Neuroanthropology
References
Further reading
Lende, D. H., & Downey, G. (2012). The Encultured Brain: An Introduction to Neuroanthropology. MIT Press.
External links
Global Sociology
Cultural geography
Cultural economics
Cultural politics
Multiculturalism
Majority–minority relations
SECI model of knowledge dimensions
The SECI model of knowledge dimensions (or the Nonaka-Takeuchi model) is a model of knowledge creation that explains how tacit and explicit knowledge are converted into organizational knowledge. The aim is to change the explicit knowledge of the model back into the tacit knowledge of the employees. In this case, employees' tacit knowledge can be kept in the organization. When employees express their thoughts and ideas openly and share their best working practices, it can lead to new innovations and help to make operations more efficient.
The SECI model distinguishes four knowledge dimensions (forming the "SECI" acronym): Socialization, Externalization, Combination, and Internalization. The model was originally developed by Ikujiro Nonaka in 1990 and later further refined by Hirotaka Takeuchi.
Four modes of knowledge conversion
Assuming that knowledge is created through the interaction between tacit and explicit knowledge, four different modes of knowledge conversion can be postulated: from tacit knowledge to tacit knowledge (socialization), from tacit knowledge to explicit knowledge (externalization), from explicit knowledge to explicit knowledge (combination), and from explicit knowledge to tacit knowledge (internalization).
Four modes of knowledge conversion:
Socialization (Tacit to Tacit) – Socialization is a process of sharing knowledge, including observation, imitation, and practice through apprenticeship. Apprentices work with their teachers or mentors to gain knowledge by imitation, observation, and practice. In effect, socialization is about capturing knowledge by physical proximity, wherein direct interaction is a supported method to acquire knowledge. Socialization comes from sharing the experience with others. It also can come from direct interactions with customers and from inside your own organization, just by interacting with another section or working group. For example, brainstorming with colleagues. The tacit knowledge is transferred by common activity in the organizations, such as being together and living in the same environment.
Externalization (Tacit to Explicit) – Externalization is the process of making tacit knowledge explicit, wherein knowledge is crystallized and is thus able to be shared by others, becoming the basis of new knowledge. At this point, personal tacit knowledge becomes useful to others as well, because it is expressed in a form that can be interpreted and understood. Concepts, images, and written documents, for example, can support this kind of interaction.
Combination (Explicit to Explicit) – Combination involves organizing and integrating knowledge, whereby different types of explicit knowledge are merged (for example, in building prototypes). The creative use of computerized communication networks and large-scale databases can support this mode of knowledge conversion: explicit knowledge is collected from inside or outside the organization and then combined, edited, or processed to form new knowledge. The new explicit knowledge is then disseminated among the members of the organization.
Internalization (Explicit to Tacit) – Internalization is the process by which an individual receives and applies knowledge, exemplified by learning-by-doing. Through internalization, explicit knowledge becomes part of an individual's knowledge and an asset for the organization. Internalization is also a process of continuous individual and collective reflection, as well as the ability to see connections and recognize patterns, and the capacity to make sense between fields, ideas, and concepts.
The four modes of knowledge conversion described above form a spiral of knowledge creation. Since knowledge creation is a continual process, the spiral evolves continuously through these four modes, and with each turn the interaction between tacit and explicit knowledge is strengthened.
Nonaka and Konno subsequently developed the SECI model by introducing the Japanese concept of 'Ba', which roughly translates as 'place'. Ba can be thought of as a shared context or shared space in which knowledge is shared, created, and utilized. It is a concept that unifies physical space such as an office space, virtual space such as e-mail, and mental space such as shared ideas.
Acceptance
Nonaka and Takeuchi's SECI model is widely known and has achieved paradigmatic status. Perceived advantages of the model include:
its appreciation of the dynamic nature of knowledge and knowledge creation.
it provides a framework for the management of the relevant processes.
The model has also been much criticized at times. Criticisms include:
It is based on a study of Japanese organizations, which heavily rely on tacit knowledge: employees are often with a company for life.
The linearity of the concept: can the spiral jump steps? Can it go counter-clockwise? Since the model is bi-directional with only two nodes, the answer is yes, but so what? An example would be an elevator in a two-story building. While it may have numbers for the floor to push to go to, it could just as easily function with only a "go" button.
Stephen Gourlay (2006) has considered why knowledge conversion has to begin with socialization if tacit knowledge is the source of new knowledge. Knowledge conversion could also begin for example with combination because new knowledge creation would begin with the creative synthesis of explicit knowledge.
The model does not explain at all how new ideas and solutions are developed in practice.
See also
Four stages of competence
I-Space (conceptual framework)
Tacit knowledge
Explicit knowledge
Organizational learning
References
Further reading
Nonaka, Ikujiro, and Hirotaka Takeuchi. 1995. The knowledge creating company: how Japanese companies create the dynamics of innovation. New York: Oxford University Press. ISBN 978-0-19-509269-1.
Seufert, A., G. Von Krogh, and A. Bach. 1999. "Towards knowledge networking." Journal of Knowledge Management 3(3):180–90.
Xu, F. 2013. "The Formation and Development of Ikujiro Nonaka's Knowledge Creation Theory." Pp. 60-76 in Towards Organizational Knowledge: The Pioneering Work of Ikujiro Nonaka, edited by G. von Krogh et al. Basingstoke, UK: Palgrave Macmillan.
Kahrens, M., & Früauff, D. H. (2018). Critical evaluation of Nonaka’s SECI model. The Palgrave Handbook of Knowledge Management, 53-83.
Knowledge management
English for specific purposes
English for specific purposes (ESP) is a subset of English as a second or foreign language. It usually refers to teaching the English language to university students or people already in employment, with reference to the particular vocabulary and skills they need. As with any language taught for specific purposes, a given course of ESP will focus on one occupation or profession, such as Technical English, Scientific English, English for medical professionals, English for waiters, English for tourism, etc. Despite the seemingly limited focus, a course of ESP can have a wide-ranging impact, as is the case with Environmental English.
English for academic purposes, taught to students before or during their degrees, is one sort of ESP, as is Business English. Aviation English is taught to pilots, air traffic controllers and civil aviation cadets to enable clear radio communications.
Definition
Absolute characteristics
ESP is defined to meet the specific needs of the learners.
ESP makes use of underlying methodology and activities of the discipline it serves.
ESP is centered on the language appropriate to these activities in terms of grammar, lexis, register, study skills, discourse and genre.
Variable characteristics
Strevens' (1988)
ESP may be, but is not necessarily:
Restricted as to the language skills to be learned (e.g. reading only);
Not taught according to any pre-ordained methodology (pp. 1–2)
Dudley-Evans & St John (1998)
ESP may be related to or designed for specific disciplines;
ESP may use, in specific teaching situations, a different methodology from that of general English;
ESP is likely to be designed for adult learners, either at a tertiary level institution or in a professional work situation. It could, however, be for learners at secondary school level;
ESP is generally designed for intermediate or advanced students;
Most ESP courses assume some basic knowledge of the language system, but it can be used with beginners (pp. 4–5)
Teaching
ESP is taught in many universities of the world. Many professional associations of teachers of English (e.g., TESOL and IATEFL) have ESP sections. Much attention is devoted to ESP course design. ESP teaching has much in common with English as a foreign or second language and English for academic purposes (EAP). Quickly developing Business English can be considered as part of a larger concept of English for specific purposes.
ESP differs from standard English teaching in that the teacher must not only be proficient in standard English but also knowledgeable in a technical field. When doctors from foreign countries learn English, they need to learn the names of their tools, the naming conventions, and the methodologies of their profession before they can ethically perform surgery. ESP courses for medicine would be relevant for any medical profession, just as English for electrical engineering would benefit a foreign engineer. Some ESP scholars recommend a "two layer" ESP course: the first layer covering generic knowledge of the specific field of study, and a second layer focusing on the specifics of the individual's specialization.
See also
Test of English for Aviation
EAP – English for academic purposes
English for Specific Purposes World (online journal)
Functional English
References
Notes
Hutchinson, T. & A. Waters. 1987. English for Specific Purposes: A learning-centred approach. Cambridge: Cambridge University Press.
Eric.ed.gov, Dudley-Evans, Tony. An Overview of ESP in the 1990s. In: The Japan Conference on English for Specific Purposes Proceedings (Aizuwakamatsu City, Fukushima, Japan, November 8, 1997)
Amazon.co.uk, Dudley-Evans, Tony (1998). Developments in English for Specific Purposes: A multi-disciplinary approach. Cambridge University Press.
Basturkmen, Helen. Ideas and Options in English for Specific Purposes. Routledge, 2006.
Eric.ed.gov, The Japan Conference on English for Specific Purposes Proceedings (Aizuwakamatsu City, Fukushima, Japan, November 8, 1997). Orr, Thomas, Ed.
External links
Organizations
Tesol.org, TESOL's ESP Interest Section and the ESP discussion list
Espsig.iatefl.org, IATEFL ESP Special Interest Group
UNAV.es, IATEFL ESP SIG Website
IESPTA - International ESP Teachers' Association (it is the only association for ESP Teachers)
Articles
Esp-world.info, Hewings, M. 2002. A history of ESP through 'English for Specific Purposes'.
Iteslj.org, Kristen Gatehouse. Key Issues in English for Specific Purposes (ESP) Curriculum Development. The Internet TESL Journal.
Antlab.sci.waseda.ac.jp, Laurence Anthony. English for Specific Purposes: What does it mean? Why is it different?
Journals
Asian ESP Journal
Elsevier.com, English for Specific Purposes An International Research Journal
Journal of Teaching English for Specific and Academic Purposes
Magazines
ESP Professional - It is the only magazine for ESP teachers worldwide.
English-language education
Coaching
Coaching is a form of development in which an experienced person, called a coach, supports a learner or client in achieving a specific personal or professional goal by providing training and guidance. The learner is sometimes called a coachee. Occasionally, coaching may mean an informal relationship between two people, of whom one has more experience and expertise than the other and offers advice and guidance as the latter learns; but coaching differs from mentoring by focusing on specific tasks or objectives, as opposed to more general goals or overall development.
Origins
The word "coaching" originated in the 16th century and initially referred to a method of transportation, specifically a horse-drawn carriage. It derived from the Hungarian word "kocsi," which meant a carriage from the village of Kocs, known for producing high-quality carriages. Over time, the term "coaching" transitioned from its literal transportation context to metaphorically represent the process of guiding and supporting individuals in their personal and professional development.
The first use of the term "coach" in connection with an instructor or trainer arose around 1830 in Oxford University slang for a tutor who "carried" a student through an exam. The word "coaching" thus identified a process used to transport people from where they are to where they want to be. The first use of the term in relation to sports came in 1861.
History
Historically the development of coaching has been influenced by many fields of activity, including adult education, the Human Potential Movement in the 1960s, large-group awareness training (LGAT) groups (such as Erhard Seminars Training, founded in 1971), leadership studies, personal development, and various subfields of psychology. The University of Sydney offered the world's first coaching psychology unit of study in January 2000, and various academic associations and academic journals for coaching psychology were established in subsequent years (see ).
Applications
Coaching is applied in fields such as sports, performing arts (singers get vocal coaches), acting (drama coaches and dialect coaches), business, education, health care, and relationships (for example, dating coaches).
Coaches use a range of communication skills (such as targeted restatements, listening, questioning, clarifying, etc.) to help clients shift their perspectives and thereby discover different approaches to achieve their goals. These skills can be used in almost all types of coaching. In this sense, coaching is a form of "meta-profession" that can apply to supporting clients in any human endeavor, ranging from their concerns in health, personal, professional, sport, social, family, political, spiritual dimensions, etc. There may be some overlap between certain types of coaching activities. Coaching approaches are also influenced by cultural differences.
Attention deficit hyperactivity disorder (ADHD)
The concept of ADHD coaching was introduced in 1994 by psychiatrists Edward M. Hallowell and John J. Ratey in their book Driven to Distraction. ADHD coaching is a specialized type of life coaching that uses techniques designed to assist individuals with attention-deficit hyperactivity disorder by mitigating the effects of executive function deficit, which is a common impairment for people with ADHD. Coaches work with clients to help them better manage time, organize, set goals, and complete projects. In addition to assisting clients understand the impact of ADHD on their lives, coaches can help them develop "workaround" strategies to deal with specific challenges, and determine and use individual strengths. Coaches also help clients get a better grasp of what reasonable expectations are for them as individuals since people with ADHD "brain wiring" often seem to need external "mirrors" for self-awareness about their potential despite their impairment.
Business and executive
Business coaching is a type of human resource development for executives, members of management, teams, and leadership. It provides positive support, feedback, and advice on an individual or group basis to improve personal effectiveness in the business setting, often focusing on behavioral changes through, for example, psychometrics or 360-degree feedback. Business coaching is also called executive coaching, corporate coaching or leadership coaching. Coaches help their clients advance towards specific professional goals. These include career transition, interpersonal and professional communication, performance management, organizational effectiveness, managing career and personal changes, developing executive presence, building credibility, enhancing strategic thinking, dealing effectively with conflict, facing work challenges, making swift and sound decisions, leading change, and building an effective team within an organization. An industrial-organizational psychologist may work as an executive coach.
Business coaching is not restricted to external experts or providers. Many organizations expect their senior leaders and middle managers to coach their team members to reach higher levels of performance, increased job satisfaction, personal growth, and career development. Research studies suggest that executive coaching has positive effects both within workplace performance as well as personal areas outside the workplace, with some differences in the impact of internal and external coaches.
In some countries, there is no licensing required to be a business or executive coach, and membership of a coaching organization is optional. Further, standards and methods of training coaches can vary widely between coaching organizations. Many business coaches refer to themselves as consultants, a broader business relationship than one which exclusively involves coaching. Research findings from a systematic review indicate that effective coaches are known for having integrity, support for those they coach, communication skills, and credibility.
In the workplace, leadership coaching has been shown to be effective for increasing employee confidence in expressing their own ideas. Research findings in a systematic review demonstrate that coaching can help reduce stress in the workplace.
Career
Career coaching focuses on work and career and is similar to career counseling. Career coaching is not to be confused with life coaching, which concentrates on personal development. Another common term for a career coach is "career guide".
Christian
A Christian coach is not a pastor or counselor (although the coach may also be qualified in those disciplines), but someone who has been professionally trained to address specific coaching goals from a distinctively Christian or biblical perspective.
Co-coaching
Co-coaching is a structured practice of coaching between peers with the goal of learning improved coaching techniques.
Dating
Dating coaches offer coaching and related products and services to improve their clients' success in dating and relationships.
Financial
Financial coaching is a relatively new form of coaching that focuses on helping clients overcome their struggle to attain specific financial goals and aspirations they have set for themselves. Financial coaching is a one-on-one relationship in which the coach works to provide encouragement and support aimed at facilitating attainment of the client's economic plans. A financial coach, also called money coach, typically focuses on helping clients to restructure and reduce debt, reduce spending, develop saving habits, and develop fiscal discipline. In contrast, the term financial adviser refers to a broader range of professionals who typically provide clients with financial products and services. Although early research links financial coaching to improvements in client outcomes, much more rigorous analysis is necessary before any causal linkages can be established.
Health and wellness
Health coaching is becoming recognized as a new way to help individuals "manage" their illnesses and conditions, especially those of a chronic nature. The coach will use special techniques, personal experience, expertise and encouragement to assist the coachee in making behavioral changes, while aiming for lowered health risks and decreased healthcare costs. The National Society of Health Coaches (NSHC) has differentiated the term health coach from wellness coach. According to the NSHC, health coaches are qualified "to guide those with acute or chronic conditions and/or moderate to high health risk", and wellness coaches provide guidance and inspiration "to otherwise 'healthy' individuals who desire to maintain or improve their overall general health status".
Homework
Homework coaching focuses on equipping a student with the study skills required to succeed academically. This approach is different from regular tutoring which typically seeks to improve a student's performance in a specific subject.
In education
Coaching is applied to support students, faculty, and administrators in educational organizations. For students, opportunities for coaching include collaborating with fellow students to improve grades and skills, both academic and social; for teachers and administrators, coaching can help with transitions into new roles.
Life
Life coaching is the process of helping people identify and achieve personal goals through developing skills and attitudes that lead to self-empowerment. Life coaching generally deals with issues such as procrastination, fear of failure, relationship issues, lack of confidence, work–life balance and career changes, and often occurs outside the workplace setting. Systematic academic psychological engagement with life coaching dates from the 1980s.
Skeptics have criticized life coaching's focus on self-improvement for its potential for commercializing friendships and other human relationships.
The business practices of the life coach industry have also stirred controversy. Unlike a psychotherapist, there is no required training, occupational licensing, or regulatory oversight for life coaching. Anyone can claim to be a life coach, and anyone can start a business selling "certificates" to would-be life coaches. Most life coaches in the US find that there is relatively low demand for the services they offer, and it ends up being a part-time side hustle rather than a full career. Many pay for expensive classes in the hope that it will make them more marketable, leading critics to suggest that the most profitable area of the field is in training the would-be life coaches, rather than being a life coach.
Relationship
Relationship coaching is the application of coaching to personal and business relationships.
Sports
In sports, a coach is an individual who provides supervision and training to a sports team or individual players. Sports coaches are involved in administration, athletic training, competition coaching, and representation of the team and the players. A 2019 survey of the literature on sports coaching found an increase in the number of publications, and most articles featured a quantitative research approach. Sports psychology emerged from the 1890s.
Esports
In esports, coaches are often responsible for planning game strategies and assisting in player development. For example, in the League of Legends World Championship, the head coach is responsible for advising players during the pick–ban phase of the game via voice-chat and during the intermission between matches.
Vocal
A vocal coach, also known as a voice coach (though this term often applies to those working with speech and communication rather than singing), is a music teacher, usually a piano accompanist, who helps singers prepare for a performance. Vocal coaches often also help singers improve their technique and care for and develop their voice, but a vocal coach is not the same as a singing teacher (also called a "voice teacher"). Vocal coaches may give private music lessons or group workshops or masterclasses to singers. They may also coach singers who are rehearsing on stage, or who are singing during a recording session.
Writing
A writing coach helps writers—such as students, journalists, and other professionals—improve their writing and productivity.
Ethics and standards
Since the mid-1990s, coaching professional associations have worked towards developing training standards. Psychologist Jonathan Passmore noted in 2016:
One of the challenges in the field of coaching is upholding levels of professionalism, standards, and ethics. To this end, coaching bodies and organizations have codes of ethics and member standards. However, because these bodies are not regulated, and because coaches do not need to belong to such a body, ethics and standards are variable in the field. In February 2016, the AC and the EMCC launched a "Global Code of Ethics" for the entire industry; individuals, associations, and organizations are invited to become signatories to it.
Many coaches have little training in comparison to the training requirements of some other helping professions: for example, licensure as a counseling psychologist in the State of California requires 3,000 hours of supervised professional experience. Some coaches are both certified coaches and licensed counseling psychologists, integrating coaching and counseling.
Critics see life coaching as akin to psychotherapy but without the legal restrictions and state regulation of psychologists. There are no state regulations/licensing requirements for coaches. Due to lack of regulation, people who have no formal training or certification can legally call themselves life or wellness coaches.
See also
List of counseling topics
List of psychotherapies
References
Anthropocentrism
Anthropocentrism is the belief that human beings are the central or most important entity on the planet. The term can be used interchangeably with humanocentrism, and some refer to the concept as human supremacy or human exceptionalism. From an anthropocentric perspective, humankind is seen as separate from nature and superior to it, and other entities (animals, plants, minerals, etc.) are viewed as resources for humans to use.
It is possible to distinguish between at least three types of anthropocentrism: perceptual anthropocentrism (which "characterizes paradigms informed by sense-data from human sensory organs"); descriptive anthropocentrism (which "characterizes paradigms that begin from, center upon, or are ordered around Homo sapiens / ‘the human'"); and normative anthropocentrism (which "characterizes paradigms that make assumptions or assertions about the superiority of Homo sapiens, its capacities, the primacy of its values, [or] its position in the universe").
Anthropocentrism tends to interpret the world in terms of human values and experiences. It is considered to be profoundly embedded in many modern human cultures and conscious acts. It is a major concept in the field of environmental ethics and environmental philosophy, where it is often considered to be the root cause of problems created by human action within the ecosphere.
However, many proponents of anthropocentrism state that this is not necessarily the case: they argue that a sound long-term view acknowledges that the global environment must be made continually suitable for humans and that the real issue is shallow anthropocentrism.
Environmental philosophy
Some environmental philosophers have argued that anthropocentrism is a core part of a perceived human drive to dominate or "master" the Earth. Anthropocentrism is believed by some to be the central problematic concept in environmental philosophy, where it is used to draw attention to claims of a systematic bias in traditional Western attitudes to the non-human world that shapes humans' sense of self and identities. Val Plumwood argued that anthropocentrism plays an analogous role in green theory to androcentrism in feminist theory and ethnocentrism in anti-racist theory. Plumwood called human-centredness "anthrocentrism" to emphasise this parallel.
One of the first extended philosophical essays addressing environmental ethics, John Passmore's Man's Responsibility for Nature has been criticised by defenders of deep ecology because of its anthropocentrism, often claimed to be constitutive of traditional Western moral thought. Indeed, defenders of anthropocentrism concerned with the ecological crisis contend that the maintenance of a healthy, sustainable environment is necessary for human well-being as opposed to for its own sake. According to William Grey, the problem with a "shallow" viewpoint is not that it is human-centred: "What's wrong with shallow views is not their concern about the well-being of humans, but that they do not really consider enough in what that well-being consists. According to this view, we need to develop an enriched, fortified anthropocentric notion of human interest to replace the dominant short-term, sectional and self-regarding conception." In turn, Plumwood in Environmental Culture: The Ecological Crisis of Reason argued that Grey's anthropocentrism is inadequate.
Many devoted environmentalists encompass a somewhat anthropocentric-based philosophical view supporting the fact that they will argue in favor of saving the environment for the sake of human populations. Grey writes: "We should be concerned to promote a rich, diverse, and vibrant biosphere. Human flourishing may certainly be included as a legitimate part of such a flourishing." Such a concern for human flourishing amidst the flourishing of life as a whole, however, is said to be indistinguishable from that of deep ecology and biocentrism, which has been proposed as both an antithesis of anthropocentrism and as a generalised form of anthropocentrism.
Judaeo–Christian traditions
In the 1985 CBC series "A Planet For the Taking", David Suzuki explored the Old Testament roots of anthropocentrism and how it shaped human views of non-human animals. Some Christian proponents of anthropocentrism base their belief on the Bible, such as verse 1:26 in the Book of Genesis.
The use of the word "dominion" in Genesis has been used to justify an anthropocentric worldview, but recently some have found it controversial, viewing it as a possible mistranslation from the Hebrew. However, an argument can be made that the Bible actually places all the importance on God as creator, and humans as merely another part of creation.
Moses Maimonides, a Torah scholar who lived in the twelfth century AD, was renowned for his staunch opposition to anthropocentrism. He referred to humans as "just a drop in the bucket" and asserted that "humans are not the axis of the world". He also claimed that anthropocentric thinking is what leads humans to believe in the existence of evil things in nature. According to Rabbi Norman Lamm, Moses Maimonides "refuted the exaggerated ideas about the importance of man and urged us to abandon these fantasies".
Catholic social teaching sees the pre-eminence of human beings over the rest of creation in terms of service rather than domination. Pope Francis, in his 2015 encyclical letter Laudato si', notes that "an obsession with denying any pre-eminence to the human person" endangers the concern which should be shown to protecting and upholding the welfare of all people, which he argues should rank alongside the "care for our common home" which is the subject of his letter. In the same text he acknowledges that "a mistaken understanding" of Christian belief "has at times led us to justify mistreating nature, to exercise tyranny over creation": in such actions, Christian believers have "not [been] faithful to the treasures of wisdom which we have been called to protect and preserve". In his follow-up exhortation, Laudate Deum (2023), he refers to a preferable understanding of "the unique and central value of the human being amid the marvellous concert of all God's creatures" as a "situated anthropocentrism".
Human rights
Anthropocentrism is the grounding for some naturalistic concepts of human rights. Defenders of anthropocentrism argue that it is the necessary fundamental premise to defend universal human rights, since what matters morally is simply being human. For example, noted philosopher Mortimer J. Adler wrote, "Those who oppose injurious discrimination on the moral ground that all human beings, being equal in their humanity, should be treated equally in all those respects that concern their common humanity, would have no solid basis in fact to support their normative principle." Adler is stating here that denying what is now called human exceptionalism could lead to tyranny, writing that if humans ever came to believe that they do not possess a unique moral status, the intellectual foundation of their liberties collapses: "Why, then, should not groups of superior men be able to justify their enslavement, exploitation, or even genocide of inferior human groups on factual and moral grounds akin to those we now rely on to justify our treatment of the animals we harness as beasts of burden, that we butcher for food and clothing, or that we destroy as disease-bearing pests or as dangerous predators?"
Author and anthropocentrism defender Wesley J. Smith from the Discovery Institute has written that human exceptionalism is what gives rise to human duties to each other, the natural world, and to treat animals humanely. Writing in A Rat is a Pig is a Dog is a Boy, a critique of animal rights ideology, "Because we are unquestionably a unique species—the only species capable of even contemplating ethical issues and assuming responsibilities—we uniquely are capable of apprehending the difference between right and wrong, good and evil, proper and improper conduct toward animals. Or to put it more succinctly, if being human isn't what requires us to treat animals humanely, what in the world does?"
Moral status of animals
Anthropocentrism is closely related to the notion of speciesism, defined by Richard D. Ryder as "a prejudice or attitude of bias in favour of the interests of members of one's own species and against those of members of other species". One of the earliest of these critics was J. Howard Moore, who in The Universal Kinship (1906) argued that Charles Darwin's On the Origin of Species (1859) "sealed the doom" of anthropocentrism.
While human cognition is relatively advanced, many traits traditionally used to justify human exceptionalism (such as rationality, emotional complexity and social bonds) are not unique to humans. Research in ethology has shown that non-human animals, such as primates, elephants, and cetaceans, also demonstrate complex social structures, emotional depth, and problem-solving abilities. This challenges the claim that humans possess qualities, absent in other animals, that would justify denying those animals moral status.
Animal welfare proponents attribute moral consideration to all sentient animals, proportional to their ability to have positive or negative mental experiences. This position is notably associated with the ethical theory of utilitarianism, which aims to maximize well-being, and is defended by Peter Singer. According to David Pearce, "other things being equal, equally strong interests should count equally." Jeremy Bentham is also known for raising the issue of animal welfare early, arguing that "the question is not, Can they reason? nor, Can they talk? but, Can they suffer?". Animal welfare proponents can in theory accept animal exploitation if the benefits outweigh the harms, but in practice they generally consider that intensive animal farming causes a massive amount of suffering that outweighs the relatively minor benefit that humans get from consuming animals.
Animal rights proponents argue that all animals have inherent rights, similar to human rights, and should not be used as means to human ends. Unlike animal welfare advocates, who focus on minimizing suffering, animal rights supporters often call for the total abolition of practices that exploit animals, such as intensive animal farming, animal testing, and hunting. Prominent figures like Tom Regan argue that animals are "subjects of a life" with inherent value, deserving moral consideration regardless of the potential benefits humans may derive from using them.
Cognitive psychology
In cognitive psychology, the term anthropocentric thinking has been defined as "the tendency to reason about unfamiliar biological species or processes by analogy to humans." Reasoning by analogy is an attractive thinking strategy, and it can be tempting to apply one's own experience of being human to other biological systems. For example, because death is commonly felt to be undesirable, it may be tempting to form the misconception that death at a cellular level or elsewhere in nature is similarly undesirable (whereas in reality programmed cell death is an essential physiological phenomenon, and ecosystems also rely on death). Conversely, anthropocentric thinking can also lead people to underattribute human characteristics to other organisms. For instance, it may be tempting to wrongly assume that an animal that is very different from humans, such as an insect, will not share particular biological characteristics, such as reproduction or blood circulation.
Anthropocentric thinking has predominantly been studied in young children (mostly up to the age of 10) by developmental psychologists interested in its relevance to biology education. Children as young as 6 have been found to attribute human characteristics to species unfamiliar to them (in Japan), such as rabbits, grasshoppers or tulips. Although relatively little is known about its persistence at a later age, evidence exists that this pattern of human exceptionalist thinking can continue through young adulthood at least, even among students who have been increasingly educated in biology.
The notion that anthropocentric thinking is an innate human characteristic has been challenged by study of American children raised in urban environments, among whom it appears to emerge between the ages of 3 and 5 years as an acquired perspective. Children's recourse to anthropocentric thinking seems to vary with their experience of nature, and cultural assumptions about the place of humans in the natural world. For example, whereas young children who kept goldfish were found to think of frogs as being more goldfish-like, other children tended to think of frogs in terms of humans. More generally, children raised in rural environments appear to use anthropocentric thinking less than their urban counterparts because of their greater familiarity with different species of animals and plants. Studies involving children from some of the indigenous peoples of the Americas have found little use of anthropocentric thinking. Study of children among the Wichí people in South America showed a tendency to think of living organisms in terms of their perceived taxonomic similarities, ecological considerations, and animistic traditions, resulting in a much less anthropocentric view of the natural world than is experienced by many children in Western societies.
In popular culture
Fiction from all eras and societies depicts humans riding, eating, milking, and otherwise treating (non-human) animals as inferior. There are occasional exceptions, such as talking animals, which stand as aberrations to the rule distinguishing people from animals.
In science fiction, humanocentrism is the idea that humans, as both beings and as a species, are the superior sentients. Essentially the equivalent of racial supremacy on a galactic scale, it entails intolerant discrimination against sentient non-humans, much like race supremacists discriminate against those not of their race. A prime example of this concept is utilized as a story element for the Mass Effect series. After humanity's first contact results in a brief war, many humans in the series develop suspicious or even hostile attitudes towards the game's various alien races. By the time of the first game, which takes place several decades after the war, many humans still retain such sentiments in addition to forming 'pro-human' organizations.
This idea is countered by anti-humanism. At times, this ideal also includes fear of and superiority over strong AIs and cyborgs, downplaying the ideas of integration, cybernetic revolts, machine rule and Tilden's Laws of Robotics.
Mark Twain mocked the belief in human supremacy in Letters from the Earth (written c. 1909, published 1962).
The Planet of the Apes franchise focuses on the analogy of apes becoming the dominant species in society and the fall of humans (see also human extinction). In the 1968 film, Taylor, a human, states "take your stinking paws off me, you damn dirty ape!". In the 2001 film, this is contrasted with Attar (a gorilla)'s quote "take your stinking hands off me, you damn dirty human!". This links in with allusions that in becoming the dominant species apes are becoming more like humans (anthropomorphism). In the film Battle for the Planet of the Apes, Virgil, an orangutan, states "ape has never killed ape, let alone an ape child. Aldo has killed an ape child. The branch did not break. It was cut with a sword." in reference to planned murder, a stereotypically human concept. Additionally, in Dawn of the Planet of the Apes, Caesar states "I always think...ape better than human. I see now...how much like them we are."
In George Orwell's novel Animal Farm, this theme of anthropocentrism is also present. Whereas originally the animals planned for liberation from humans and animal equality, as evident from the "seven commandments" such as "whatever goes upon two legs is an enemy", "Whatever goes upon four legs, or has wings, is a friend", "All animals are equal"; the pigs would later abridge the commandments with statements such as "All animals are equal, but some animals are more equal than others", and "Four legs good, two legs better."
The 2012 documentary The Superior Human? systematically analyzes anthropocentrism and concludes that value is fundamentally an opinion, and since life forms naturally value their own traits, most humans are misled to believe that they are actually more valuable than other species. This natural bias, according to the film, combined with a received sense of comfort and an excuse for exploitation of non-humans cause anthropocentrism to remain in society.
In his 2009 book Eating Animals, Jonathan Safran Foer describes anthropocentrism as "The conviction that humans are the pinnacle of evolution, the appropriate yardstick by which to measure the lives of other animals, and the rightful owners of everything that lives."
See also
References
Further reading
Bertalanffy, Ludwig von (1993). General System Theory: Foundations, Development, Applications, pp. 239–48.
Boddice, Rob (ed.) (2011). Anthropocentrism: Humans, Animals, Environments. Leiden and Boston: Brill.
Mylius, Ben (2018). "Three Types of Anthropocentrism". Environmental Philosophy 15 (2): 159–194.
White, Lynn Townsend, Jr. (1967). "The Historical Roots of Our Ecologic Crisis". Science, Vol. 155 (3767), 10 March 1967, pp. 1203–1207.
"Human supremacism: why are animal rights activists still the 'orphans of the left'?". New Statesman America, April 30, 2019.
"Human Supremacy: The Source of All Environmental Crises?". Psychology Today, December 25, 2021.
Feldenkrais Method
The Feldenkrais Method (FM) is a type of movement therapy devised by Israeli Moshé Feldenkrais (1904–1984) during the mid-20th century. The method is claimed to reorganize connections between the brain and body and so improve body movement and psychological state.
There is no conclusive evidence for any medical benefits of the therapy. However, researchers do not believe FM poses serious risks.
Description
The Feldenkrais Method is a type of alternative movement therapy that proponents claim can repair impaired connections between the motor cortex and the body, so benefiting the quality of body movement and improving wellbeing. Practitioners view it as a form of somatic education "that integrates the body, mind and psyche through an educational model in which a trained Feldenkrais practitioner guides a client (the ‘student’) through movements with hands-on and verbally administered cues," according to Clinical Sports Medicine.
The Feldenkrais Guild of North America claims that the Feldenkrais method allows people to "rediscover [their] innate capacity for graceful, efficient movement" and that "These improvements will often generalize to enhance functioning in other aspects of [their] life".
The Oxford Handbook of Music Performance describes FM as "an experiential learning process that uses movement and guided attention to develop and refine self-awareness." It notes that FM is "increasingly used among high-level performers, such as musicians, actors, dancers, and athletes."
Feldenkrais lessons have two types, one verbally guided and practiced in groups called Awareness Through Movement, and one hands-on and practiced one-to-one called Functional Integration. Moshé Feldenkrais wrote, "The purpose of these sensorimotor lessons is to refine one’s ability to make perceptual distinctions between movements that are easy and pleasurable and those that are strained and uncomfortable, which results in the discovery of new movement possibilities as well as potential for further improvements."
Five Principles
FM operates broadly within five principles:
Learning is a process: "relies on sensory and kinesthetic information that one experiences through interactions with the environment"
Posture as dynamic equilibrium: "the ability to regain equilibrium after a large disturbance"
Exploratory versus performative movement: "the ability to make distinctions in the ease and quality of movement and to try out movements that may be unfamiliar"
Whole versus part learning: "exploring component parts of an action as well as the whole"
Repetition and variation: "introducing novelty in learning in order to expand possibilities for choice"
Effectiveness
In 2015, the Australian Government's Department of Health published the results of a review of 17 natural therapies that sought to determine which would continue being covered by health insurance; the Feldenkrais Method was one of 16 therapies for which no clear evidence of effectiveness was found. Accordingly in 2017 the Australian government identified the Feldenkrais Method as a practice that would not qualify for insurance subsidy, saying this step would "ensure taxpayer funds are expended appropriately and not directed to therapies lacking evidence".
Proponents claim that the Feldenkrais Method can benefit people with a number of medical conditions, including children with autism, and people with multiple sclerosis. However, no studies in which participants were clearly identified as having an autism spectrum disorder or developmental disabilities have been presented to back these claims.
There is limited evidence that workplace-based use of the Feldenkrais Method may help aid rehabilitation of people with upper limb complaints.
A 2022 report on the effectiveness of the Feldenkrais Method by the German Institute for Quality and Efficiency in Health Care found a "hint" of benefit for people with Parkinson's disease, compared to a passive lecture program. Evidence for helping chronic low back pain was inconsistent. The report found no evidence for long-term benefit of FM, or benefit for other conditions. It concluded, "The question about the benefit of the Feldenkrais method in comparison with active strategies such as extensive physiotherapy generally remains open. Overall, little evidence is available. From an ethical perspective, the absence of evidence from RCTs is problematic for informed decision making but does not constitute evidence of an absent benefit. Only 2 small, ongoing RCTs of questionable relevance were identified, and therefore, the availability of evidence is not expected to change in the short term."
Criticism
David Gorski has written that the Method bears similarities to faith healing, is like "glorified yoga", and that it "borders on quackery". Quackwatch places the Feldenkrais Method on its list of "Unnaturalistic methods".
History
From the 1950s until his death in 1984, Feldenkrais taught in his home city of Tel Aviv. He gained recognition in part through media accounts of his work with prominent individuals, including Israeli Prime Minister David Ben-Gurion.
In his biography of Feldenkrais, Making Connections: Roots and Resonance in the Life of Moshe Feldenkrais (2007), David Kaetz argues that many lines of influence can be found between the Judaism of Feldenkrais's upbringing and the Feldenkrais Method – for instance, the use of paradox as a pedagogical tool.
Making Connections described Feldenkrais' approach:
Feldenkrais was critical of the appropriation of the term 'energy' to express immeasurable phenomena or to label experiences that people had trouble describing ... He was impatient when someone invoked energy in pseudoscientific 'explanations' that masked a lack of understanding. In such cases he urged skepticism and scientific discourse. He encouraged empirical and phenomenological narratives that could lead to insights.
Beginning in the late 1950s, Feldenkrais made trips to teach in Europe and America. Several hundred people became certified Feldenkrais practitioners through trainings he held in San Francisco from 1975 to 1978 and in Amherst, Massachusetts, from 1980 to 1984.
Cybernetics and dynamic systems theory continued to influence the Feldenkrais Method in the 1990s through the work of human development researcher Esther Thelen.
See also
Alexander Technique
Rolfing
Yoga
Tai Chi
References
Further reading
External links
International Feldenkrais Federation (IFF)
Groupthink
Groupthink is a psychological phenomenon that occurs within a group of people in which the desire for harmony or conformity in the group results in an irrational or dysfunctional decision-making outcome. Cohesiveness, or the desire for cohesiveness, in a group may produce a tendency among its members to agree at all costs. This causes the group to minimize conflict and reach a consensus decision without critical evaluation.
Groupthink is a construct of social psychology but has an extensive reach and influences literature in the fields of communication studies, political science, management, and organizational theory, as well as important aspects of deviant religious cult behaviour.
Overview
Groupthink is sometimes stated to occur (more broadly) within natural groups within the community, for example to explain the lifelong different mindsets of those with differing political views (such as "conservatism" and "liberalism" in the U.S. political context), or the purported benefits of teamwork vs. work conducted in solitude. However, this conformity of viewpoints within a group does not mainly involve deliberate group decision-making, and might be better explained by the collective confirmation bias of the individual members of the group.
The term was coined in 1952 by William H. Whyte Jr. Most of the initial research on groupthink was conducted by Irving Janis, a research psychologist from Yale University. Janis published an influential book in 1972, which was revised in 1982. Janis used the Bay of Pigs disaster (the failed invasion of Castro's Cuba in 1961) and the Japanese attack on Pearl Harbor in 1941 as his two prime case studies. Later studies have evaluated and reformulated his groupthink model.
Groupthink requires individuals to avoid raising controversial issues or alternative solutions, and there is loss of individual creativity, uniqueness and independent thinking. The dysfunctional group dynamics of the "ingroup" produces an "illusion of invulnerability" (an inflated certainty that the right decision has been made). Thus the "ingroup" significantly overrates its own abilities in decision-making and significantly underrates the abilities of its opponents (the "outgroup"). Furthermore, groupthink can produce dehumanizing actions against the "outgroup". Members of a group can often feel under peer pressure to "go along with the crowd" for fear of "rocking the boat" or of how their speaking out will be perceived by the rest of the group. Group interactions tend to favor clear and harmonious agreements, and it can be a cause for concern when few or no new innovations or arguments for better policies, outcomes and structures are called into question (McLeod). Groupthink is common in such settings because group activities and group projects in general make it extremely easy to pass on offering constructive opinions.
Some methods that have been used to counteract groupthink in the past include selecting teams from more diverse backgrounds, and even mixing men and women in groups (Kamalnath). Groupthink is considered by many to be a detriment to companies, organizations and any work situation. Most senior-level positions require individuals to be independent in their thinking. A positive correlation has been found between outstanding executives and decisiveness (Kelman). Groupthink also prohibits an organization from moving forward and innovating if no one ever speaks up and says something could be done differently.
Antecedent factors such as group cohesiveness, faulty group structure, and situational context (e.g., community panic) play into the likelihood of whether or not groupthink will impact the decision-making process.
History
William H. Whyte Jr. derived the term from George Orwell's Nineteen Eighty-Four, and popularized it in 1952 in Fortune magazine:
Groupthink was Whyte's diagnosis of the malaise affecting both the study and practice of management (and, by association, America) in the 1950s. Whyte was dismayed that employees had subjugated themselves to the tyranny of groups, which crushed individuality and were instinctively hostile to anything or anyone that challenged the collective view.
American psychologist Irving Janis (Yale University) pioneered the initial research on the groupthink theory. He does not cite Whyte, but coined the term again by analogy with "doublethink" and similar terms that were part of the newspeak vocabulary in the novel Nineteen Eighty-Four by George Orwell. He initially defined groupthink as follows:
He went on to write:
Janis set the foundation for the study of groupthink starting with his research in the American Soldier Project where he studied the effect of extreme stress on group cohesiveness. After this study he remained interested in the ways in which people make decisions under external threats. This interest led Janis to study a number of "disasters" in American foreign policy, such as failure to anticipate the Japanese attack on Pearl Harbor (1941); the Bay of Pigs Invasion fiasco (1961); and the prosecution of the Vietnam War (1964–67) by President Lyndon Johnson. He concluded that in each of these cases, the decisions occurred largely because of groupthink, which prevented contradictory views from being expressed and subsequently evaluated.
After the publication of Janis' book Victims of Groupthink in 1972, and a revised edition with the title Groupthink: Psychological Studies of Policy Decisions and Fiascoes in 1982, the concept of groupthink was used to explain many other faulty decisions in history. These events included Nazi Germany's decision to invade the Soviet Union in 1941, the Watergate scandal and others. Despite the popularity of the concept of groupthink, fewer than two dozen studies addressed the phenomenon itself following the publication of Victims of Groupthink, between the years 1972 and 1998. This was surprising considering how many fields of interest it spans, including political science, communications, organizational studies, social psychology, management, strategy, counseling, and marketing. This lack of follow-up is most likely explained by the facts that group research is difficult to conduct, that groupthink has many independent and dependent variables, and that it is unclear "how to translate [groupthink's] theoretical concepts into observable and quantitative constructs".
Nevertheless, outside research psychology and sociology, wider culture has come to detect groupthink in observable situations, for example:
" [...] critics of Twitter point to the predominance of the hive mind in such social media, the kind of groupthink that submerges independent thinking in favor of conformity to the group, the collective"
"[...] leaders often have beliefs which are very far from matching reality and which can become more extreme as they are encouraged by their followers. The predilection of many cult leaders for abstract, ambiguous, and therefore unchallengeable ideas can further reduce the likelihood of reality testing, while the intense milieu control exerted by cults over their members means that most of the reality available for testing is supplied by the group environment. This is seen in the phenomenon of 'groupthink', alleged to have occurred, notoriously, during the Bay of Pigs fiasco."
"Groupthink by Compulsion [...] [G]roupthink at least implies voluntarism. When this fails, the organization is not above outright intimidation. [...] In [a nationwide telecommunications company], refusal by the new hires to cheer on command incurred consequences not unlike the indoctrination and brainwashing techniques associated with a Soviet-era gulag."
Symptoms
To make groupthink testable, Irving Janis devised eight symptoms indicative of groupthink:
Type I: Overestimations of the group — its power and morality
Illusions of invulnerability creating excessive optimism and encouraging risk taking.
Unquestioned belief in the morality of the group, causing members to ignore the consequences of their actions.
Type II: Closed-mindedness
Rationalizing warnings that might challenge the group's assumptions.
Stereotyping those who are opposed to the group as weak, evil, biased, spiteful, impotent, or stupid.
Type III: Pressures toward uniformity
Self-censorship of ideas that deviate from the apparent group consensus.
Illusions of unanimity among group members; silence is viewed as agreement.
Direct pressure to conform placed on any member who questions the group, couched in terms of "disloyalty".
Mindguards — self-appointed members who shield the group from dissenting information.
When a group exhibits most of the symptoms of groupthink, the consequences of a failing decision process can be expected: incomplete analysis of the other options, incomplete analysis of the objectives, failure to examine the risks associated with the favored choice, failure to reevaluate the options initially rejected, poor information research, selection bias in available information processing, failure to prepare for a back-up plan.
Causes
Irving Janis identified three antecedent conditions to groupthink:
High group cohesiveness: Cohesiveness is the main factor that leads to groupthink. Groups that lack cohesiveness can of course make bad decisions, but they do not experience groupthink. In a cohesive group, members avoid speaking out against decisions, avoid arguing with others, and work towards maintaining friendly relationships in the group. If cohesiveness gets to such a level that there are no longer disagreements between members, then the group is ripe for groupthink.
Deindividuation: Group cohesiveness becomes more important than individual freedom of expression.
Illusions of unanimity: Members perceive falsely that everyone agrees with the group's decision; silence is seen as consent. Janis noted that the unity of group members was mere illusion. Members may disagree with the organizations' decision, but go along with the group for many reasons, such as maintaining their group status and avoiding conflict with managers or workmates. Such members think that suggesting opinions contrary to others may lead to isolation from the group.
Structural faults: The group is organized in ways that disrupt the communication of information, or the group carelessly makes decisions.
Insulation of the group: This can promote the development of unique, inaccurate perspectives on issues the group is dealing with, which can then lead to faulty solutions to the problem.
Lack of impartial leadership: Leaders control the group discussion, by planning what will be discussed, allowing only certain questions to be asked, and asking for opinions of only certain people in the group. Closed-style leadership is when leaders announce their opinions on the issue before the group discusses the issue together. Open-style leadership is when leaders withhold their opinion until a later time in the discussion. Groups with a closed-style leader are more biased in their judgments, especially when members had a high degree of certainty.
Lack of norms requiring methodological procedures.
Homogeneity of members' social backgrounds and ideology.
Situational context:
Highly stressful external threats: High-stake decisions can create tension and anxiety; group members may cope with this stress in irrational ways. Group members may rationalize their decision by exaggerating the positive consequences and minimizing the possible negative consequences. In attempt to minimize the stressful situation, the group decides quickly and allows little to no discussion or disagreement. Groups under high stress are more likely to make errors, lose focus of the ultimate goal, and use procedures that members know have not been effective in the past.
Recent failures: These can lead to low self-esteem, resulting in agreement with the group for fear of being seen as wrong.
Excessive difficulties in decision-making tasks.
Time pressures: Group members are more concerned with efficiency and quick results than with quality and accuracy. Time pressures can also lead group members to overlook important information.
Moral dilemmas.
Although it is possible for a situation to contain all three of these factors, all three are not always present even when groupthink is occurring. Janis considered a high degree of cohesiveness to be the most important antecedent to producing groupthink, and always present when groupthink was occurring; however, he believed high cohesiveness would not always produce groupthink. A very cohesive group abides with all group norms; but whether or not groupthink arises is dependent on what the group norms are. If the group encourages individual dissent and alternative strategies to problem solving, it is likely that groupthink will be avoided even in a highly cohesive group. This means that high cohesion will lead to groupthink only if one or both of the other antecedents is present, situational context being slightly more likely than structural faults to produce groupthink.
A 2018 study found that the absence of a tenured project leader can also create conditions for groupthink to prevail. The presence of an experienced project manager can reduce the likelihood of groupthink through steps like critically analysing ideas, promoting open communication, encouraging diverse perspectives, and raising team awareness of groupthink symptoms.
Among people who have a bicultural identity, those with a highly integrated bicultural identity, as opposed to a less integrated one, were found to be more prone to groupthink. In another 2022 study, in Tanzania, Hofstede's cultural dimensions came into play: it was observed that in high power distance societies, individuals are hesitant to voice dissent, deferring to leaders' preferences in making decisions. Furthermore, as Tanzania is a collectivist society, community interests supersede those of individuals. The combination of high power distance and collectivism creates optimal conditions for groupthink to occur.
Prevention
As observed by Aldag and Fuller (1993), the groupthink phenomenon seems to rest on a set of unstated and generally restrictive assumptions:
The purpose of group problem solving is mainly to improve decision quality
Group problem solving is considered a rational process.
Benefits of group problem solving:
variety of perspectives
more information about possible alternatives
better decision reliability
dampening of biases
social presence effects
Groupthink prevents these benefits due to structural faults and provocative situational context
Groupthink prevention methods will produce better decisions
An illusion of well-being is presumed to be inherently dysfunctional.
Group pressures towards consensus lead to concurrence-seeking tendencies.
It has been thought that groups with a strong ability to work together will be able to solve dilemmas in a quicker and more efficient fashion than an individual. Groups have a greater amount of resources, which leads them to be able to store and retrieve information more readily and come up with more alternative solutions to a problem. There was a recognized downside to group problem solving in that it takes groups more time to come to a decision and requires that people make compromises with each other. However, it was not until the research of Janis appeared that anyone really considered that a highly cohesive group could impair the group's ability to generate quality decisions. Tight-knit groups may appear to make decisions better because they can come to a consensus quickly and at a low energy cost; however, over time this process of decision-making may decrease the members' ability to think critically. It is, therefore, considered by many to be important to combat the effects of groupthink.
According to Janis, decision-making groups are not necessarily destined to groupthink. He devised ways of preventing groupthink:
Leaders should assign each member the role of "critical evaluator". This allows each member to freely air objections and doubts.
Leaders should not express an opinion when assigning a task to a group.
Leaders should absent themselves from many of the group meetings to avoid excessively influencing the outcome.
The organization should set up several independent groups, working on the same problem.
All effective alternatives should be examined.
Each member should discuss the group's ideas with trusted people outside of the group.
The group should invite outside experts into meetings. Group members should be allowed to discuss with and question the outside experts.
At least one group member should be assigned the role of devil's advocate. This should be a different person for each meeting.
The devil's advocate in a group may provide questions and insight which contradict the majority group in order to avoid groupthink decisions. A study by Ryan Hartwig confirms that the devil's advocacy technique is very useful for group problem-solving. It allows for conflict to be used in a way that is most-effective for finding the best solution so that members will not have to go back and find a different solution if the first one fails. Hartwig also suggests that the devil's advocacy technique be incorporated with other group decision-making models such as the functional theory to find and evaluate alternative solutions. The main idea of the devil's advocacy technique is that somewhat structured conflict can be facilitated to not only reduce groupthink, but to also solve problems.
Diversity of all kinds is also instrumental in preventing groupthink. Individuals with varying backgrounds, ways of thinking, and professional and life experiences can offer unique perspectives and challenge assumptions. In a 2004 study, a diverse team of problem solvers outperformed a team consisting of the best individual problem solvers, because the latter tended to think alike.
Psychological safety, emphasized by Edmondson & Lei and Hirak et al., is crucial for effective group performance. It involves creating an environment that encourages learning and removes barriers perceived as threats by team members. Edmondson et al. demonstrated variations in psychological safety based on work type, hierarchy, and leadership effectiveness, highlighting its importance in employee development and fostering a culture of learning within organizations.
A similar term to groupthink is the Abilene paradox, another phenomenon that is detrimental when working in groups. When organizations fall into the Abilene paradox, they take actions in contradiction to what their perceived goals may be and therefore defeat the very purposes they are trying to achieve. Failure to communicate desires or beliefs can cause the Abilene paradox.
Examples
The Watergate scandal is an example of groupthink. Before the scandal occurred, a meeting took place in which the issue was discussed. One of Nixon's campaign aides was unsure whether he should speak up and give his input. If he had voiced his disagreement with the group's decision, it is possible that the scandal could have been avoided.
After the Bay of Pigs invasion fiasco, President John F. Kennedy sought to avoid groupthink during the Cuban Missile Crisis using "vigilant appraisal". During meetings, he invited outside experts to share their viewpoints, and allowed group members to question them carefully. He also encouraged group members to discuss possible solutions with trusted members within their separate departments, and he even divided the group up into various sub-groups, to partially break the group cohesion. Kennedy was deliberately absent from the meetings, so as to avoid pressing his own opinion.
Cass Sunstein reports that introverts can sometimes be silent in meetings with extroverts; he recommends explicitly asking for each person's opinion, either during the meeting or afterwards in one-on-one sessions. Sunstein points to studies showing groups with a high level of internal socialization and happy talk are more prone to bad investment decisions due to groupthink, compared with groups of investors who are relative strangers and more willing to be argumentative. To avoid group polarization, where discussion with like-minded people drives an outcome further to an extreme than any of the individuals favored before the discussion, he recommends creating heterogeneous groups which contain people with different points of view. Sunstein also points out that people arguing a side they do not sincerely believe (in the role of devil's advocate) tend to be much less effective than a sincere argument. This can be accomplished by dissenting individuals, or a group like a Red Team that is expected to pursue an alternative strategy or goal "for real".
Empirical findings and meta-analysis
Testing groupthink in a laboratory is difficult because synthetic settings remove groups from real social situations, which ultimately changes the variables conducive or inhibitive to groupthink. Because of its subjective nature, researchers have struggled to measure groupthink as a complete phenomenon, instead frequently opting to measure its particular factors. These factors range from and focus on group and situational aspects.
Park (1990) found that "only 16 empirical studies have been published on groupthink", and concluded that they "resulted in only partial support of his [Janis's] hypotheses". Park concludes, "despite Janis' claim that group cohesiveness is the major necessary antecedent factor, no research has shown a significant main effect of cohesiveness on groupthink." Park also concludes that research does not support Janis' claim that cohesion and leadership style interact to produce groupthink symptoms. Park presents a summary of the results of the studies analyzed. According to Park, a study by Huseman and Driver (1979) indicates groupthink occurs in both small and large decision-making groups within businesses. This results partly from group isolation within the business. Manz and Sims (1982) conducted a study showing that autonomous work groups are susceptible to groupthink symptoms in the same manner as decision-making groups within businesses. Fodor and Smith (1982) produced a study revealing that group leaders with high power motivation create atmospheres more susceptible to groupthink. Leaders with high power motivation possess characteristics similar to leaders with a "closed" leadership style—an unwillingness to respect dissenting opinion. The same study indicates that level of group cohesiveness is insignificant in predicting groupthink occurrence. Park summarizes a study performed by Callaway, Marriott, and Esser (1985) in which groups with highly dominant members "made higher quality decisions, exhibited lowered state of anxiety, took more time to reach a decision, and made more statements of disagreement/agreement". Overall, groups with highly dominant members expressed characteristics inhibitory to groupthink. If highly dominant members are considered equivalent to leaders with high power motivation, the results of Callaway, Marriott, and Esser contradict those of Fodor and Smith. A study by Leana (1985) indicates that the interaction between level of group cohesion and leadership style is completely insignificant in predicting groupthink. This finding refutes Janis' claim that the factors of cohesion and leadership style interact to produce groupthink. Park summarizes a study by McCauley (1989) in which structural conditions of the group were found to predict groupthink while situational conditions did not. The structural conditions included group insulation, group homogeneity, and promotional leadership. The situational conditions included group cohesion. These findings refute Janis' claim about group cohesiveness predicting groupthink.
Overall, studies on groupthink have largely focused on the factors (antecedents) that predict groupthink. Groupthink occurrence is often measured by number of ideas/solutions generated within a group, but there is no uniform, concrete standard by which researchers can objectively conclude groupthink occurs. The studies of groupthink and groupthink antecedents reveal a mixed body of results. Some studies indicate group cohesion and leadership style to be powerfully predictive of groupthink, while other studies indicate the insignificance of these factors. Group homogeneity and group insulation are generally supported as factors predictive of groupthink.
Case studies
Politics and military
Groupthink can have a strong hold on political decisions and military operations, which may result in enormous wastage of human and material resources. Highly qualified and experienced politicians and military commanders sometimes make very poor decisions when in a suboptimal group setting. Scholars such as Janis and Raven attribute political and military fiascoes, such as the Bay of Pigs Invasion, the Vietnam War, and the Watergate scandal, to the effect of groupthink. More recently, Dina Badie argued that groupthink was largely responsible for the shift in the U.S. administration's view on Saddam Hussein that eventually led to the 2003 invasion of Iraq by the United States. After the September 11 attacks, "stress, promotional leadership, and intergroup conflict" were all factors that gave rise to the occurrence of groupthink. Political case studies of groupthink serve to illustrate the impact that the occurrence of groupthink can have in today's political scene.
Bay of Pigs invasion and the Cuban Missile Crisis
The United States Bay of Pigs Invasion of April 1961 was the primary case study that Janis used to formulate his theory of groupthink. The invasion plan was initiated by the Eisenhower administration, but when the Kennedy administration took over, it "uncritically accepted" the plan of the Central Intelligence Agency (CIA). When some people, such as Arthur M. Schlesinger Jr. and Senator J. William Fulbright, attempted to present their objections to the plan, the Kennedy team as a whole ignored these objections and kept believing in the morality of their plan. Eventually Schlesinger minimized his own doubts, performing self-censorship. The Kennedy team stereotyped Fidel Castro and the Cubans by failing to question the CIA about its many false assumptions, including the ineffectiveness of Castro's air force, the weakness of Castro's army, and the inability of Castro to quell internal uprisings.
Janis argued that the fiasco that ensued could have been prevented if the Kennedy administration had followed the methods for preventing groupthink that it adopted during the Cuban Missile Crisis, which took place just one year later, in October 1962. In the latter crisis, essentially the same political leaders were involved in decision-making, but this time they had learned from their previous mistake of seriously underrating their opponents.
Pearl Harbor
The attack on Pearl Harbor on December 7, 1941, is a prime example of groupthink. A number of factors, such as shared illusions and rationalizations, contributed to the lack of precaution taken by U.S. Navy officers based in Hawaii. The United States had intercepted Japanese messages and discovered that Japan was arming itself for an offensive attack somewhere in the Pacific Ocean. Washington warned the officers stationed at Pearl Harbor, but the warning was not taken seriously: they assumed that the Empire of Japan was merely taking precautions in case its embassies and consulates in enemy territories were seized.
The U.S. Navy and Army in Pearl Harbor also shared rationalizations about why an attack was unlikely. Some of them included:
"The Japanese would never dare attempt a full-scale surprise assault against Hawaii because they would realize that it would precipitate an all-out war, which the United States would surely win."
"The Pacific Fleet concentrated at Pearl Harbor was a major deterrent against air or naval attack."
"Even if the Japanese were foolhardy to send their carriers to attack us [the United States], we could certainly detect and destroy them in plenty of time."
"No warships anchored in the shallow water of Pearl Harbor could ever be sunk by torpedo bombs launched from enemy aircraft."
Space Shuttle Challenger disaster
On January 28, 1986, NASA launched the space shuttle Challenger. The launch was significant because a non-astronaut high school teacher was to be the first American civilian in space; the space shuttle was perceived to be safe enough to make this possible. NASA's engineering and launch teams relied on teamwork: to launch the shuttle, individual team members had to affirm that each system was functioning nominally. Warnings from the Morton Thiokol engineers who designed and built the Challenger's rocket boosters, that cooler temperatures on the day of the launch could result in booster failure and the death of the crew, went unheeded. The disaster grounded space shuttle flights for nearly three years. Ironically, this particular flight was meant to be a demonstration of confidence in the safety of space shuttle technology.
The Challenger case was subject to a more quantitatively oriented test of Janis's groupthink model performed by Esser and Lindoerfer, who found clear signs of positive antecedents to groupthink in the critical decisions concerning the launch of the shuttle. The day of the launch was rushed for publicity reasons: NASA wanted to captivate and hold the attention of America. Having civilian teacher Christa McAuliffe on board to broadcast a live lesson, and the possibility of a mention by President Ronald Reagan in the State of the Union address, were opportunities NASA deemed critical to increasing interest in its potential civilian spaceflight program. The schedule NASA set out to meet was, however, self-imposed. It seemed incredible to many that an organization with a perceived history of successful management would have locked itself into a schedule it had no chance of meeting.
Corporate world
In the corporate world, ineffective and suboptimal group decision-making can negatively affect the health of a company and cause a considerable amount of monetary loss.
Swissair
Aaron Hermann and Hussain Rammal illustrate the detrimental role of groupthink in the collapse of Swissair, a Swiss airline company thought to be so financially stable that it earned the title "the Flying Bank". The authors argue that, among other factors, Swissair carried two symptoms of groupthink: the belief that the group was invulnerable and the belief in the morality of the group. In addition, before the fiasco, the size of the company board was reduced, eliminating industrial expertise, which may have further increased the likelihood of groupthink. With board members lacking expertise in the field and having somewhat similar backgrounds, norms, and values, the pressure to conform may have become more prominent. This phenomenon, group homogeneity, is an antecedent to groupthink. Together, these conditions may have contributed to the poor decision-making process that eventually led to Swissair's collapse.
Marks & Spencer and British Airways
Another example of groupthink from the corporate world is illustrated in the United Kingdom-based companies Marks & Spencer and British Airways. The negative impact of groupthink took place during the 1990s as both companies pursued globalization expansion strategies. Researcher Jack Eaton's content analysis of media press releases revealed that all eight symptoms of groupthink were present during this period. The most predominant symptom was the illusion of invulnerability: both companies underestimated the potential for failure after years of profitability and success in challenging markets. Until the consequences of groupthink erupted, they were considered blue chips and darlings of the London Stock Exchange. During 1998–1999 the price of Marks & Spencer shares fell from 590 to less than 300, and that of British Airways from 740 to 300. Both companies had previously been prominently featured in the UK press and media for more positive reasons, reflecting national pride in their sector-wide performance.
Sports
Recent literature on groupthink attempts to study the application of this concept beyond the framework of business and politics. One particularly relevant and popular arena in which groupthink has rarely been studied is sports. The lack of literature in this area prompted Charles Koerber and Christopher Neck to begin a case-study investigation that examined the effect of groupthink on the decision of the Major League Umpires Association (MLUA) to stage a mass resignation in 1999. The decision was a failed attempt to gain a stronger negotiating stance against Major League Baseball (MLB). Koerber and Neck suggest that three groupthink symptoms can be found in the decision-making process of the MLUA. First, the umpires overestimated the power that they had over the baseball league and the strength of their group's resolve. The union also exhibited some degree of closed-mindedness in the notion that MLB was the enemy. Lastly, there was the presence of self-censorship; some umpires who disagreed with the decision to resign failed to voice their dissent. These factors, along with other decision-making defects, led to a decision that was suboptimal and ineffective.
Recent developments
Ubiquity model
Researcher Robert Baron (2005) contends that the connection between certain antecedents which Janis believed necessary has not been demonstrated by the current collective body of research on groupthink. He believes that Janis' antecedents for groupthink are incorrect, and argues that not only are they "not necessary to provoke the symptoms of groupthink, but that they often will not even amplify such symptoms". As an alternative to Janis' model, Baron proposed a ubiquity model of groupthink. This model provides a revised set of antecedents for groupthink, including social identification, salient norms, and low self-efficacy.
General group problem-solving (GGPS) model
Aldag and Fuller (1993) argue that the groupthink concept was based on a "small and relatively restricted sample" that became too broadly generalized. Furthermore, the concept is too rigidly staged and deterministic, and empirical support for it has not been consistent. The authors compare the groupthink model to findings presented by Maslow and Piaget; they argue that, in each case, the model incited great interest and further research that subsequently invalidated the original concept. Aldag and Fuller thus suggest a new model, the general group problem-solving (GGPS) model, which integrates new findings from the groupthink literature and alters aspects of groupthink itself. The primary difference between the GGPS model and groupthink is that the former is more value-neutral and more political.
Reexamination
Later scholars have re-assessed the merit of groupthink by reexamining the case studies that Janis originally used to buttress his model. Roderick Kramer (1998) believed that, because scholars today have a more sophisticated set of ideas about the general decision-making process and because new and relevant information about the fiascos has surfaced over the years, a reexamination of the case studies is appropriate and necessary. He argues that new evidence does not support Janis' view that groupthink was largely responsible for President Kennedy's decisions in the Bay of Pigs Invasion and President Johnson's escalation of U.S. military involvement in the Vietnam War. Both presidents sought the advice of experts outside their political groups more than Janis suggested. Kramer also argues that the presidents were the final decision-makers in the fiascos; while determining which course of action to take, they relied more heavily on their own construals of the situations than on any group-consenting decision presented to them. Kramer concludes that Janis' explanation of the two military issues is flawed and that groupthink has much less influence on group decision-making than is popularly believed.
Although groupthink is generally something to be avoided, it can have some positive effects. Choi and Kim found that group identity traits, such as belief in the group's moral superiority, were linked to less concurrence-seeking, better decision-making, better team activities, and better team performance. Their study also showed that the relationship between groupthink and defective decision-making was insignificant. These findings suggest that, in the right circumstances, groupthink does not always have negative outcomes, and they call the original theory of groupthink into question.
Reformulation
Scholars are challenging the original view of groupthink proposed by Janis.
Whyte (1998) argues that a group's collective efficacy, i.e., its confidence in its own abilities, can lead to reduced vigilance and a higher risk tolerance, similar to how groupthink was described. McCauley (1998) proposes that the attractiveness of group members might be the most prominent factor in causing poor decisions. Turner and Pratkanis (1991) suggest that, from a social identity perspective, groupthink can be seen as a group's attempt to ward off potentially negative views of the group. Together, the contributions of these scholars have brought about new understandings of groupthink that help reformulate Janis' original model.
Sociocognitive theory
According to a sociocognitive theory, many of the basic characteristics of groupthink – e.g., strong cohesion, an indulgent atmosphere, and an exclusive ethos – are the result of a special kind of mnemonic encoding (Tsoukalas, 2007). Members of tightly knit groups have a tendency to represent significant aspects of their community as episodic memories, and this has a predictable influence on their group behavior and collective ideology, as opposed to what happens when these aspects are encoded as semantic memories (which is common in formal and looser group formations).
See also
Abilene paradox
Amity-enmity complex
Asch conformity experiments
Bandwagon effect
Brainwashing
Collective intelligence
Collective narcissism
Democratic centralism
Dunning–Kruger effect
Echo chamber (media)
Emotional contagion
False consensus effect
Filter bubble
Group flow
Group polarization
Group-serving bias
Groupshift
Herd behaviour
Homophily
In-group favoritism
Individualism
Lollapalooza effect
Mass psychology
Moral Man and Immoral Society
No soap radio
Mob rule
Organizational dissent
Positive psychology (relevantly, its criticism)
Preference falsification
Realistic conflict theory
Risky shift
Scapegoating
Social comparison theory
Solidarity
Spiral of silence
System justification
Team error
Three men make a tiger
Tone policing
Tuckman's stages of group development
Vendor lock-in
Wishful thinking
Woozle effect
Diversity
Cultural diversity
Multiculturalism
References
Further reading
Articles
Books
Martin, Everett Dean, The Behavior of Crowds: A Psychological Study, Harper & Brothers Publishers, New York, 1920.
Conformity
Group processes
Consensus
Cognitive biases
Error
The Chalice and the Blade

The Chalice and The Blade: Our History, Our Future is a 1987 book by Riane Eisler. The author presents a conceptual framework for studying social systems with particular attention to how a society constructs roles and relations between the female and male halves of humanity.
Overview
Eisler highlights the tension between what she calls the dominator or domination model and the partnership model, and proposes that the tension between these two models underlies the span of human cultural evolution. She traces this tension in Western culture from prehistory to the present.
The book closes with two contrasting future scenarios that challenge conventional views about cultural evolution up to the time of the book's publication. The book is now in 26 foreign editions, including most European languages as well as Chinese, Japanese, Urdu, Korean, Arabic, Hebrew, and Turkish. Briefly, her thesis is that, despite old narratives about an inherently flawed humanity, mounting evidence shows humanity is not doomed to perpetuate patterns of violence and oppression. Female values offer a partnership alternative with deep roots in the pre-patriarchal paradigm of cultural evolution. No utopia is predicted; rather, the book envisions a way of structuring society in more peaceful, equitable, and sustainable ways.
Proposed method of social analysis
The method of social analysis in the book is multidisciplinary in its study of relational dynamics. In contrast to earlier studies of society, this method concerns what kinds of social systems support the human capacity for consciousness, caring, and creativity, or conversely for insensitivity, cruelty, and destructiveness.
The study of relational dynamics is an application of systems analysis: the study of how different components of living systems interact to maintain one another and the larger whole of which they are a part. Drawing from a trans-disciplinary database, it applies this approach to a wide-ranging exploration of how humans think, feel, and behave individually and in groups. Its sources include cross-cultural anthropological and sociological surveys, studies of individual societies, writings by historians, analyses of laws, moral codes, art, and literature, scholarship from psychology, economics, education, political science, philosophy, religious studies, archaeology, and the study of myths and legends, and data from more recent fields such as primatology, neuroscience, chaos theory, self-organizing systems theory, non-linear dynamics, gender studies, women's studies, and men's studies.
A distinguishing feature of the study of relational dynamics is its particular attention to matters marginalized or ignored in conventional male-oriented studies. It highlights the importance of how a society constructs relations between the male and female halves of humanity, as well as between them and their daughters and sons, taking into account findings from both the biological and social sciences that show the critical importance of the "private" sphere of family and other intimate relations in shaping beliefs and behaviors.
New perspective on cultural evolution
The author compares two underlying types of social organization in which the cultural construction of gender roles and relations is key. Eisler places human societies on what she calls the partnership-domination continuum. At one end of the continuum are societies oriented to the partnership model. At the other are societies oriented to the dominator or domination model. These categories transcend conventional categories such as ancient vs. modern, Eastern vs. Western, religious vs. secular, rightist vs. leftist, and so on.
The domination model ranks man over man, man over woman, race over race, and religion over religion, with difference equated with superiority or inferiority. It comprises an authoritarian structure in both family and state or tribe, rigid male dominance, and a high degree of abuse and violence. The partnership model consists of a democratic and egalitarian structure in both the family and the state or tribe, with hierarchies of actualization in which power empowers rather than disempowers (as in hierarchies of domination). There is also gender partnership and a low degree of abuse and violence, since violence is not needed to maintain rigid top-down rankings.
Content
In this book, Eisler traces tensions between these two models, starting in prehistory. It draws from many sources, including the study of myth and linguistics, as well as archeological findings by the Indo-Europeanists J. P. Mallory and Marija Gimbutas and archeologists such as James Mellaart, Alexander Marshack, André Leroi-Gourhan, and Nikolas Platon.
Based on these findings, Eisler presents evidence that, for the longest span of prehistory, cultures in the more fertile regions of the globe oriented primarily to the partnership model, producing what Eisler calls a "gylany": a neologism for a society in which relationships between the sexes are an egalitarian partnership. This gender partnership was a core component of a more egalitarian, peaceful, and matrifocal culture focused on life-giving and nurture. These societies were once widespread in Europe and around the Mediterranean, and lasted well into the early Bronze Age in the Minoan civilization of Crete.
Later, culture skewed towards patriarchy during a chaotic time of upheaval related to climate change and the incursions of warlike nomadic tribes. These peoples brought with them a domination system and imposed rigid rankings of domination, including the rigid domination of women by men and the equation of "real masculinity" with power and violence. This led to a radical cultural transformation.
Eisler's book is not the only work describing this massive cultural shift; other scholars have paid special attention to the radical change in gender relations. Historian Gerda Lerner details it in her Oxford University Press book The Creation of Patriarchy.
However, Eisler does not use the term "patriarchy". Nor does she use "matriarchy" to describe a more gender-balanced society, noting that rule by fathers (patriarchy) and rule by mothers (matriarchy) can be two sides of the same dominator coin. She proposes that the real alternative is a partnership system, or gylany.
Nonetheless, some critics have accused Eisler of writing about a "matriarchy" in prehistoric times, claiming that she presents earlier societies in which women were not subordinate as ideal. Eisler does point out that the more partnership-oriented societies described in The Chalice and the Blade were more peaceful and generally equitable; yet she emphasizes that they were not ideal. She further makes clear that the point is not to return to any "utopia" but rather to use what we learn from our past to move toward a more equitable and sustainable future.
Some archaeologists also question whether these earlier societies were more peaceful; such critiques have especially targeted Marija Gimbutas, one of Eisler's sources. This critique fits the conventional narrative of cultural evolution as a linear progression from "barbarism" to "civilization", a narrative Eisler challenges in light of the brutality of "civilizations" ranging from Chinese, Indian, Arab, and European empires to Nazi Germany and Stalin's Soviet Union.
In addition, some archaeologists question whether the great profusion of female figurines in these earlier cultures, going back 30,000 years and perhaps even longer, indicates that they venerated a Goddess or Great Mother. When these figurines were first excavated in the 19th century, the men who found them in millennia-old caves called them Venus figurines (a term still used today).
Subsequent confirmation
Further confirmation of Eisler's view of Neolithic society comes from archeologist Ian Hodder, who excavated Çatalhöyük, one of the largest Neolithic sites found to date. Hodder confirms gender equity as a key part of a more partnership-oriented social configuration in this generally equitable early farming settlement, which shows no signs of destruction through warfare for over 1,000 years. At the same time, Hodder found little evidence of matriarchy (or matrilineality) in the society. The only place where he found a clear division in symbolism was in the kind of activity depicted: women were more often depicted with plants, and men in the role of hunter. It is also worth noting that Hodder, unlike Eisler, described male symbolism in detail, notably paintings in which men were most often depicted, often with beards.
In a 2004 Scientific American article, Hodder writes:
Even analyses of isotopes in bones give no indication of divergence in lifestyle translating into differences in status and power between women and men... [which points to] a society in which sex is relatively unimportant in assigning social roles, with neither burials nor space in houses suggesting gender inequality.
Data from other world regions also supports the thesis of an earlier partnership direction. For example, after The Chalice and the Blade was published in China by the Chinese Academy of Social Sciences, a group of scholars at the Academy wrote a book showing that Chinese prehistory also saw a massive cultural shift from more partnership-oriented cultures to a system of rigid domination in both the family and the state.
See also
Futures studies
References
1987 non-fiction books
English-language books
Gender studies books
Çatalhöyük
Online community

An online community, also called an internet community or web community, is a community whose members interact with each other primarily via the Internet. Members of the community usually share common interests. For many, online communities may feel like home, consisting of a "family of invisible friends"; these "friends" may be connected through gaming communities and gaming companies. Those who wish to be a part of an online community usually have to become a member via a specific site and thereby gain access to specific content or links.
An online community can act as an information system where members can post, comment on discussions, give advice, or collaborate, including on medical advice or specific health care research. Commonly, people communicate through social networking sites, chat rooms, forums, email lists, and discussion boards, and increasingly through daily social media platforms such as Facebook, Twitter, Instagram, and Discord. People may also join online communities through video games, blogs, and virtual worlds, and may even meet new significant others on dating sites or in dating virtual worlds.
The rise in popularity of Web 2.0 websites has allowed for easier real-time communication and connection to others and facilitated the introduction of new ways for information to be exchanged. Yet these interactions can also displace offline social interaction and encourage more negative and derogatory ways of speaking to others; racism, bullying, and sexist comments, among other behaviors, have surfaced in and been linked to online communities.
One scholarly definition of an online community is this: "a virtual community is defined as an aggregation of individuals or business partners who interact around a shared interest, where the interaction is at least partially supported or mediated by technology (or both) and guided by some protocols or norms".
Purpose
Digital communities (web communities, but also communities formed over, e.g., Xbox and PlayStation) provide a platform for a range of services to users. It has been argued that they can fulfill Maslow's hierarchy of needs. They allow for social interaction across the world between people of different cultures who might not otherwise have met, with offline meetings also becoming more common. Another key use of web communities is access to, and the exchange of, information. With communities existing for even very small niches, it is possible to find other people interested in a topic and to seek and share information on a subject when no such people are available in the immediate area offline. This has led to a range of popular sites based on areas such as health, employment, finances, and education. Online communities can be vital for companies' marketing and outreach.
Unexpected and innovative uses of web communities have also emerged with social networks being used in conflicts to alert citizens of impending attacks. The UN sees the web and specifically social networks as an important tool in conflicts and emergencies.
Web communities have grown in popularity; at one count, 6 of the 20 most-trafficked websites were community-based sites. The amount of traffic to such websites is expected to increase as a growing proportion of the world's population attains Internet access.
Categorization
The idea of a community is not a new concept. On the telephone, in ham radio and in the online world, social interactions no longer have to be based on proximity; instead they can literally be with anyone anywhere. The study of communities has had to adapt along with the new technologies. Many researchers have used ethnography to attempt to understand what people do in online spaces, how they express themselves, what motivates them, how they govern themselves, what attracts them, and why some people prefer to observe rather than participate. Online communities can congregate around a shared interest and can be spread across multiple websites.
Some features of online communities include:
Content: articles, information, and news about a topic of interest to a group of people.
Forums or newsgroups and email: so that community members can communicate in a delayed fashion.
Chat and instant messaging: so that community members can communicate more immediately.
Development
Online communities typically establish a set of values, sometimes known collectively as netiquette or Internet etiquette, as they grow. These values may include: opportunity, education, culture, democracy, human services, equality within the economy, information, sustainability, and communication. An online community's purpose is to serve as a common ground for people who share the same interests.
Online communities may be used as calendars to keep up with events such as upcoming gatherings or sporting events. They also form around activities and hobbies. Many online communities relating to health care help inform, advise, and support patients and their families. Students can take classes online and they may communicate with their professors and peers online. Businesses have also started using online communities to communicate with their customers about their products and services as well as to share information about the business. Other online communities allow a wide variety of professionals to come together to share thoughts, ideas and theories.
Fandom is an example of what online communities can evolve into. Online communities have grown in influence in "shaping the phenomena around which they organize", according to Nancy K. Baym's work. She says: "More than any other commercial sector, the popular culture industry relies on online communities to publicize and provide testimonials for their products." The strength of online communities' power is displayed through the season 3 premiere of BBC's Sherlock. Online activity by fans seems to have had a noticeable influence on the plot and direction of the season-opening episode. Mark Lawson of The Guardian recounts how fans have, to a degree, directed the outcome of the events of the episode. He says that "Sherlock has always been one of the most web-aware shows, among the first to find a satisfying way of representing electronic chatter on-screen." Fan communities on platforms like Twitter, Instagram, and Reddit built around sports, actors, and musicians have become powerful communities, both culturally and politically.
Discussions where members may post their feedback are essential in the development of an online community. Online communities may encourage individuals to come together to teach and learn from one another. They may encourage learners to discuss and learn about real-world problems and situations, as well as to focus on such things as teamwork, collaborative thinking and personal experiences.
Blogs
Blogs are among the major platforms on which online communities form. Blogging practices include microblogging, where the amount of information in a single element is smaller, and liveblogging, in which an ongoing event is blogged about in real time.
The ease and convenience of blogging has allowed for its growth. Major blogging platforms include Twitter and Tumblr, which combine social media and blogging, as well as platforms such as WordPress, which allow content to be hosted on their own servers but also permit users to download, install, and modify the software on their own servers. 23.1% of the top 10 million websites are either hosted on or run WordPress.
Forums
Internet forums, sometimes called bulletin boards, are websites that allow users to post topics, also known as threads, for discussion, with other users able to reply and so create a conversation. Forums follow a hierarchical structure of categories: many popular forum software platforms categorise forums by purpose and allow forum administrators to create subforums within their platform. Over time, more advanced features have been added to forums; the ability to attach files, embed YouTube videos, and send private messages is now commonplace. At one point, the largest forum, Gaia Online, contained over 2 billion posts.
Members are commonly assigned to user groups that control their access rights and permissions. Common access levels include the following (a minimal sketch of such role-based permissions appears after this list):
User: A standard account with the ability to create topics and reply.
Moderator: Moderators are typically tasked with the daily administration tasks such as answering user queries, dealing with rule-breaking posts, and the moving, editing or deletion of topics or posts.
Administrator: Administrators deal with the forum strategy including the implementation of new features alongside more technical tasks such as server maintenance.
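The access levels above amount to a small role-based permission scheme. The following Python sketch is a minimal illustration of how such user groups might be modeled; the specific roles, permission names, and mapping are assumptions for illustration, not taken from any particular forum software.

```python
from enum import Flag, auto

class Permission(Flag):
    """Atomic forum permissions, combinable as bit flags."""
    CREATE_TOPIC = auto()
    REPLY = auto()
    EDIT_ANY_POST = auto()
    MOVE_TOPIC = auto()
    DELETE_TOPIC = auto()
    MANAGE_FORUM = auto()

# Hypothetical role-to-permission mapping mirroring the access levels above.
ROLE_PERMISSIONS = {
    "user": Permission.CREATE_TOPIC | Permission.REPLY,
    "moderator": (Permission.CREATE_TOPIC | Permission.REPLY
                  | Permission.EDIT_ANY_POST | Permission.MOVE_TOPIC
                  | Permission.DELETE_TOPIC),
    "administrator": (Permission.CREATE_TOPIC | Permission.REPLY
                      | Permission.EDIT_ANY_POST | Permission.MOVE_TOPIC
                      | Permission.DELETE_TOPIC | Permission.MANAGE_FORUM),
}

def can(role: str, permission: Permission) -> bool:
    """Return True if the given role carries the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, Permission(0))

assert can("moderator", Permission.MOVE_TOPIC)
assert not can("user", Permission.DELETE_TOPIC)
```

One design note: encoding permissions as flags, rather than hard-coding role names throughout the software, lets administrators add new groups without touching the checks themselves.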
Social networks
Social networks are platforms allowing users to set up their own profile and build connections with like-minded people who pursue similar interests through interaction. The first traceable example of such a site is SixDegrees.com, set up in 1997, which included a friends list and the ability to send messages to members linked to friends and to see other users' associations. For much of the 21st century, the popularity of such networks has been growing. Friendster was the first social network to gain mass media attention; however, by 2004 it had been overtaken in popularity by Myspace, which in turn was later overtaken by Facebook. In 2013, Facebook attracted 1.23 billion monthly users, rising from 145 million in 2008. Facebook was the first social network to surpass 1 billion registered accounts, and by 2020 had more than 2.7 billion active users. Meta Platforms, the owner of Facebook, also owns three other leading platforms for online communities: Instagram, WhatsApp, and Facebook Messenger.
Most top-ranked social networks originate in the United States, but European services like VK, Japanese platform LINE, or Chinese social networks WeChat, QQ or video-sharing app Douyin (internationally known as TikTok) have also garnered appeal in their respective regions.
Current trends focus around the increased use of mobile devices when using social networks. Statistics from Statista show that, in 2013, 97.9 million users accessed social networks from a mobile device in the United States.
Classification
Researchers and organizations have worked to classify types of online community and to characterise their structure. For example, it is important to know the security, access, and technology requirements of a given type of community as it may evolve from an open to a private and regulated forum. It has been argued that the technical aspects of online communities, such as whether pages can be created and edited by the general user base (as is the case with wikis) or only certain users (as is the case with most blogs), can place online communities into stylistic categories. Another approach argues that "online community" is a metaphor and that contributors actively negotiate the meaning of the term, including values and social norms.
Some research has looked at the users of online communities. Amy Jo Kim has classified the rituals and stages of online community interaction and called it the "membership life cycle". Clay Shirky talks about communities of practice, whose members collaborate and help each other in order to make something better or improve a certain skill. What makes these communities bond is "love" of something, as demonstrated by members who go out of their way to help without any financial interest. Campbell et al. developed a character theory for analyzing online communities, based on tribal typologies. In the communities they investigated they identified three character types:
The Big Man (offers a form of order and stability to the community by absorbing many conflictual situations personally)
The Sorcerer (does not engage in reciprocity with others in the community)
The Trickster (generally a comical yet complex figure found in most of the world's cultures)
Online communities have also forced retail firms to change their business strategies: companies have to network more, adjust computations, and alter their organizational structures. This leads to changes in a company's communications with its manufacturers, including what information is shared and made accessible for further productivity and profits. Because consumers and customers in all fields are becoming accustomed to more interaction and engagement online, adjustments must be made to keep audiences intrigued.
Online communities have been characterized as "virtual settlements" that have the following four requirements: interactivity, a variety of communicators, a common public place where members can meet and interact, and sustained membership over time. Based on these considerations, it can be said that microblogs such as Twitter can be classified as online communities.
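As a rough illustration, the four "virtual settlement" requirements can be expressed as a simple classification check. The sketch below is hypothetical: the measurement proxies and thresholds are illustrative assumptions, not values taken from the research literature.

```python
from dataclasses import dataclass

@dataclass
class PlatformObservation:
    """Hypothetical measurements taken from a candidate platform."""
    replies_per_post: float         # proxy for interactivity
    distinct_posters: int           # variety of communicators
    has_shared_public_space: bool   # a common place to meet and interact
    median_member_tenure_days: int  # sustained membership over time

def is_virtual_settlement(obs: PlatformObservation) -> bool:
    """Check the four 'virtual settlement' requirements.

    The thresholds below are illustrative assumptions, not values
    taken from the research literature.
    """
    return (obs.replies_per_post >= 1.0
            and obs.distinct_posters >= 3
            and obs.has_shared_public_space
            and obs.median_member_tenure_days >= 90)

# A microblog with threaded replies and long-lived accounts would qualify.
print(is_virtual_settlement(PlatformObservation(2.4, 5000, True, 365)))  # True
```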
Building communities
Dorine C. Andrews argues, in the article "Audience-Specific Online Community Design", that there are three parts to building an online community: starting the online community, encouraging early online interaction, and moving to a self-sustaining interactive environment. When starting an online community, it may be effective to create webpages that appeal to specific interests. Online communities with clear topics and easy access tend to be most effective. In order to gain early interaction by members, privacy guarantees and content discussions are very important. Successful online communities tend to be able to function self-sufficiently.
Participation
There are two major types of participation in online communities: public participation and non-public participation, also called lurking. Lurkers are participants who join a virtual community but do not contribute. In contrast, public participants, or posters, are those who join virtual communities and openly express their beliefs and opinions. Both lurkers and posters frequently enter communities to find answers and to gather general information. For example, there are several online communities dedicated to technology. In these communities, posters are generally experts in the field who can offer technological insight and answer questions, while lurkers tend to be technological novices who use the communities to find answers and to learn.
In general, virtual community participation is influenced by how participants view themselves in society as well as by norms, both of society and of the online community. Participants also join online communities for friendship and support. In a sense, virtual communities may fill social voids in participants' offline lives.
Sociologist Barry Wellman presents the idea of "glocalization": the Internet's ability to extend participants' social connections to people around the world while also aiding them in further engagement with their local communities.
Roles in an online community
Although online societies differ in content from real society, the roles people assume in their online communities are quite similar. Elliot Volkman points out several categories of people that play a role in the cycle of social networking, such as:
Community architect – Creates the online community, sets goals and decides the purpose of the site.
Community manager – Oversees the progress of the society. Enforces rules, encourages social norms, assists new members, and spreads awareness about the community.
Professional member – This is a member who is paid to contribute to the site. The purpose of this role is to keep the community active.
Free members – These members visit sites most often and represent the majority of the contributors. Their contributions are crucial to the sites' progress.
Passive lurker – These people do not contribute to the site but rather absorb the content, discussion, and advice.
Active lurker – Consumes the content and shares that content with personal networks and other communities.
Power users – These people push for new discussion, provide positive feedback to community managers, and sometimes even act as community managers themselves. They have a major influence on the site and make up only a small percentage of the users.
Aspects of successful online communities
An article entitled "The real value of on-line communities", written by A. Armstrong and John Hagel and published in the Harvard Business Review, addresses a handful of elements that are key to the growth of an online community and its success in drawing in members. The article focuses specifically on online communities related to business, but its points can apply to any online community. It addresses four main categories of business-based online communities, but states that a truly successful one will combine qualities of each: communities of transaction, communities of interest, communities of fantasy, and communities of relationship. Anubhav Choudhury describes the four types as follows:
Communities of transaction emphasize the importance of buying and selling products in a social online manner where people must interact in order to complete the transaction.
Communities of interest involve the online interaction of people with specific knowledge on a certain topic.
Communities of fantasy encourage people to participate in online alternative forms of reality, such as games where they are represented by avatars.
Communities of relationship often reveal or at least partially protect someone's identity while allowing them to communicate with others, such as in online dating services.
Membership lifecycle
Amy Jo Kim's membership lifecycle theory states that members of online communities begin their life in a community as visitors, or lurkers. After breaking through a barrier, people become novices and participate in community life. After contributing for a sustained period of time, they become regulars. If they break through another barrier they become leaders, and once they have contributed to the community for some time they become elders. This life cycle can be applied to many virtual communities, such as bulletin board systems, blogs, mailing lists, and wiki-based communities like Wikipedia.
A similar model can be found in the works of Lave and Wenger, who illustrate a cycle of how users become incorporated into virtual communities using the principles of legitimate peripheral participation. They suggest five types of trajectories among a learning community's members (a minimal state-machine sketch of these trajectories appears after the YouTube example below):
Peripheral (i.e. Lurker) – An outside, unstructured participation
Inbound (i.e. Novice) – Newcomer is invested in the community and heading towards full participation
Insider (i.e. Regular) – Full committed community participant
Boundary (i.e. Leader) – A leader, sustains membership participation and brokers interactions
Outbound (i.e. Elder) – Process of leaving the community due to new relationships, new positions, new outlooks
The following shows the correlation between the learning trajectories and Web 2.0 community participation by using the example of YouTube:
Peripheral (Lurker) – Observing the community and viewing content. Does not add to the community content or discussion. The user occasionally goes onto YouTube.com to check out a video that someone has directed them to.
Inbound (Novice) – Just beginning to engage with the community. Starts to provide content. Tentatively interacts in a few discussions. The user comments on other users' videos. Potentially posts a video of their own.
Insider (Regular) – Consistently adds to the community discussion and content. Interacts with other users. Regularly posts videos. Makes a concerted effort to comment and rate other users' videos.
Boundary (Leader) – Recognized as a veteran participant, their opinions are granted greater consideration by the community. Connects with regulars to make higher-concept ideas. The user has become recognized as a contributor to watch. Their videos may be podcasts commenting on the state of YouTube and its community. The user would not consider watching another user's videos without commenting on them. Will often correct a user in behavior the community considers inappropriate. Will reference other users' videos in their comments as a way to cross link content.
Outbound (Elders) – Leave the community. Their interests may have changed, the community may have moved in a direction that they disagree with, or they may no longer have time to maintain a constant presence in the community.
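One way to make this lifecycle concrete is to treat the five trajectories as states in a simple state machine. The Python sketch below is a minimal illustration under two simplifying assumptions: that the stages are strictly ordered, and that a member can exit (become outbound) from any stage. Neither assumption is asserted by Kim or by Lave and Wenger.

```python
# States follow Lave & Wenger's five trajectories; the strict ordering and
# the ability to exit ("outbound") from any stage are simplifying assumptions.
TRANSITIONS = {
    "peripheral": {"inbound", "outbound"},   # lurker
    "inbound":    {"insider", "outbound"},   # novice
    "insider":    {"boundary", "outbound"},  # regular
    "boundary":   {"outbound"},              # leader
    "outbound":   set(),                     # elder / has left the community
}

def advance(stage: str, target: str) -> str:
    """Move a member to `target` if the lifecycle permits the transition."""
    if target not in TRANSITIONS[stage]:
        raise ValueError(f"cannot move from {stage!r} to {target!r}")
    return target

stage = "peripheral"
for nxt in ("inbound", "insider", "boundary", "outbound"):
    stage = advance(stage, nxt)  # lurker -> novice -> regular -> leader -> exit
```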
Newcomers
Newcomers are important for online communities. Online communities rely on volunteers' contributions, and most face high turnover as one of their main challenges. For example, only a minority of Wikipedia users contribute regularly, and only a minority of those contributors participate in community discussions. In one study conducted by Carnegie Mellon University, researchers found that "more than two-thirds (68%) of newcomers to Usenet groups were never seen again after their first post". These facts reflect that recruiting and retaining new members has become a crucial problem for online communities: without replacing members who leave, communities will eventually wither away.
Newcomers, as new members of an online community, often face many barriers when contributing to a project, and those barriers may lead them to give up the project or even leave the community. By conducting a systematic literature review of 20 primary studies on the barriers faced by newcomers contributing to open source software projects, Steinmacher et al. identified 15 different barriers, which they classified into the five categories described below:
Social Interaction: this category describes barriers that arise when newcomers interact with existing members of the community. The three barriers found to have the greatest influence on newcomers are "lack of social interaction with project members", "not receiving a timely response", and "receiving an improper response".
Newcomers' Previous Knowledge: this category describes barriers relating to newcomers' prior experience with the project. The three barriers in this category are "lack of domain expertise", "lack of technical expertise", and "lack of knowledge of project practices".
Finding a Way to Start: this category describes issues that arise when newcomers try to start contributing. The two barriers found are "difficulty to find an appropriate task to start with" and "difficulty to find a mentor".
Documentation: project documentation was also shown to be a barrier for newcomers, especially in open source software projects. The three barriers found are "outdated documentation", "too much documentation", and "unclear code comments".
Technical Hurdles: technical barriers are also among the major issues newcomers face when starting to contribute. This category includes the barriers "issues setting up a local workspace", "code complexity", and "software architecture complexity".
Because of the barriers described above, it is essential that online communities engage newcomers and help them adjust to the new environment. From the communities' side, newcomers can be both beneficial and harmful. On one hand, newcomers can bring innovative ideas and resources; on the other, their unfamiliarity with community norms can lead to misbehavior that harms the community. Kraut et al. defined five basic issues faced by online communities when dealing with newcomers, and proposed several design claims for each problem in their book Building Successful Online Communities.
Recruitment. Online communities need to keep recruiting new members in the face of the high turnover of existing members. Three suggestions are made in the book:
Interpersonal recruitment: recruit new members through existing members' personal relationships
Word-of-mouth recruitment: new members join the community because of word-of-mouth influence from existing members
Impersonal advertisement: although its direct effect is weaker than that of the previous two strategies, impersonal advertising can effectively increase joining among potential members who have little prior knowledge of the community.
Selection. Another challenge for online communities is to select members who are a good fit. Unlike offline organizations, online communities find it harder to select the right candidates, given users' anonymity and the ease of creating new identities online. Two approaches are suggested in the book:
Self-selection: ensure that only well-fitting members choose to join.
Screening: ensure that only well-fitting members are allowed to join.
Keeping Newcomers Around. Before new members feel committed and make major contributions, they must stay in the community long enough to learn its norms and form an attachment to it. However, the majority tend to leave during this period, when they are very sensitive to the positive or negative signals they receive from the group, which can largely determine whether they quit or stay. The authors suggest two approaches:
Entry Barriers: higher entry barriers are more likely to drive away new members, but members who survive a severe initiation process tend to show stronger commitment than members who faced lower entry barriers.
Interactions with existing members: communicating with existing members and receiving friendly responses from them encourages new members' commitment, so existing members are encouraged to treat newcomers gently. One study by Halfaker et al. suggested that reverting new members' work on Wikipedia is likely to make them leave the community. Thus, new members are more likely to stay and develop commitment if interactions between existing and new members are friendly and gentle. The book suggests several approaches, including "introduction threads" in the communities, "assign the responsibilities of having friendly interactions with newcomers to designated older-timers", and "discouraging hostility towards newcomers who make mistakes".
Socialization. Different online communities have their own norms and regulations, and new members need to learn to participate in an appropriate way. Socialization is thus the process through which new members acquire the behaviors and attitudes essential to playing their roles in a group or organization. Previous research on organizational socialization has demonstrated that newcomers' active information seeking and institutional socialization tactics are associated with better performance, higher job satisfaction, greater commitment to the organization, a greater likelihood of staying, and thus lower turnover. However, institutionalized socialization tactics are not commonly used in online settings; most online communities still use individualized tactics, in which newcomers are socialized individually and more informally during their training. Thus, in order to keep new members, the design suggestions given by the book are "using formal, sequential and collective socialization tactics" and "old-timers can provide formal mentorship to newcomers".
Protection. Newcomers are different from existing members, and an influx of newcomers might change the environment or culture developed by existing members. New members might also behave inappropriately, and thus be potentially harmful to the community, as a result of their lack of experience. Different communities also have different levels of damage tolerance: some are more fragile to newcomers' inappropriate behavior (such as open source group collaboration software projects) while others are not (such as some discussion forums). So the speed of integrating new members with existing communities really depends on the community's type and goals, and groups need protection mechanisms that serve multiple purposes.
Motivations and barriers to participation
Successful online communities motivate online participation. Methods of motivating participation in these communities have been investigated in several studies.
There are many persuasive factors that draw users into online communities. Peer-to-peer systems and social networking sites rely heavily on member contribution. Users' underlying motivations to involve themselves in these communities have been linked to some persuasion theories of sociology.
According to the reciprocation theory, a successful online community must provide its users with benefits that compensate for the costs of time, effort and materials members provide. People often join these communities expecting some sort of reward.
The consistency theory says that once people make a public commitment to a virtual society, they will often feel obligated to stay consistent with their commitment by continuing contributions.
The social validation theory explains how people are more likely to join and participate in an online community if it is socially acceptable and popular.
One of the greatest attractions towards online communities is the sense of connection users build among members. Participation and contribution are influenced when members of an online community are aware of their global audience.
The majority of people learn by example and often follow others, especially when it comes to participation. Individuals are reserved about contributing to an online community for many reasons including but not limited to a fear of criticism or inaccuracy. Users may withhold information that they do not believe is particularly interesting, relevant, or truthful. In order to challenge these contribution barriers, producers of these sites are responsible for developing knowledge-based and foundation-based trust among the community.
Users' perception of audience is another reason that users participate in online communities. Results show that users usually underestimate the size of their audience in online communities; on average, social media users guess their audience to be about 27% of its real size. Regardless of this underestimation, audience size affects users' self-presentation and content production, which means a higher level of participation.
There are two types of virtual online communities (VOCs): dependent and self-sustained. Dependent VOCs are those in which people use the virtual community as an extension of themselves, interacting with people they know. Self-sustained VOCs are communities in which relationships between participating members are formed and maintained through encounters in the online community itself. For all VOCs, there is the issue of creating identity and reputation in the community. People can create whatever identity they like through their interactions with other members. The username is what members identify each other by, but it says very little about the person behind it. The main features of online communities that attract people are a shared communication environment, relationships formed and nurtured, a sense of belonging to a group, the internal structure of the group, and a common space shared by people with similar ideas and interests. The three most critical issues are belonging, identity, and interest. For an online community to flourish there needs to be consistent participation, interest, and motivation.
Research conducted by Helen Wang applied the Technology Acceptance Model (TAM) to online community participation. Internet self-efficacy positively predicted perceived ease of use: participants' beliefs in their ability to use the internet and web-based tools determined how much effort they expected to invest. Community environment positively predicted perceived ease of use and usefulness, and intrinsic motivation positively predicted perceived ease of use, usefulness, and actual use. The technology acceptance model thus positively predicts how likely it is that an individual will participate in an online community.
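To illustrate the direction of these relationships (not their actual estimated strengths), the sketch below encodes the predictors reported above in a toy linear path model. The weights, the equal averaging, and the function name are illustrative assumptions only, not coefficients from Wang's study.

```python
def tam_participation_score(self_efficacy: float,
                            community_env: float,
                            intrinsic_motivation: float) -> float:
    """Toy linear path model of the TAM relationships reported above.

    All inputs are assumed to lie in [0, 1]; the equal weights are
    illustrative assumptions, not coefficients estimated by the study.
    """
    ease_of_use = (self_efficacy + community_env + intrinsic_motivation) / 3
    usefulness = (community_env + intrinsic_motivation) / 2
    # Predicted participation rises with ease of use, usefulness, and motivation.
    return (ease_of_use + usefulness + intrinsic_motivation) / 3

print(round(tam_participation_score(0.8, 0.7, 0.9), 2))  # 0.83
```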
Consumer-vendor interaction
Establishing a relationship between the consumer and a seller has become a new science with the emergence of online communities. They are a new market for companies to tap, and doing so requires an understanding of the relationships built in online communities. Online communities gather people around common interests, and these common interests can include brands, products, and services. Companies not only have a chance to reach a new group of consumers in online communities, but also to tap into information about those consumers. Companies have a chance to learn about consumers in an environment in which they feel a certain amount of anonymity and are thus more open to allowing a company to see what they really want or are looking for.
In order to establish a relationship with the consumer, a company must seek a way to identify with how individuals interact with the community. This is done by understanding the relationships an individual has with an online community. There are six identifiable relationship statuses: considered, committed, inactive, faded, recognized, and unrecognized. Unrecognized status means the consumer is unaware of the online community or has not decided whether the community is useful. Recognized status is when a person is aware of the community but not entirely involved. Considered status is when a person begins their involvement with the site; usage at this stage is still very sporadic. Committed status is when a relationship between a person and an online community is established and the person is fully involved with the community. Inactive status is when an online community no longer has relevance to a person. Faded status is when a person has begun to fade away from a site. It is important to recognize which status a consumer holds, because it can help determine which approach to use.
Companies not only need to understand how a consumer functions within an online community, but also a company "should understand the communality of an online community". This means a company must understand the dynamic and structure of the online community in order to establish a relationship with the consumer. Online communities have cultures of their own, and to establish a commercial relationship or even engage at all, one must understand the community's values and proprieties. It has even proven beneficial to treat online commercial relationships more as friendships than as business transactions.
Because of the smoke screen of anonymity, online engagement allows a person to interact socially with strangers in a much more personal way. This personal connection the consumer feels translates into how they want to establish relationships online. They separate what is commercial or spam from what is relational. The relational is what they associate with human interaction, while the commercial is what they associate with digital or non-human interaction. Thus the online community should not be viewed as "merely a sales channel". Instead, it should be viewed as a network for establishing interpersonal communications with the consumer.
Growth cycle
Most online communities grow slowly at first, due in part to the fact that the strength of motivation for contributing is usually proportional to the size of the community. As the size of the potential audience increases, so does the attraction of writing and contributing. This, coupled with the fact that organizational culture does not change overnight, means creators can expect slow progress at first with a new virtual community. As more people begin to participate, however, the aforementioned motivations will increase, creating a virtuous cycle in which more participation begets more participation.
Community adoption can be forecast with the Bass diffusion model, originally conceived by Frank Bass to describe the process by which new products get adopted as an interaction between innovative early adopters and those who follow them.
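A minimal sketch of how the Bass model could be used to forecast adoption follows; the parameter values (market potential m, coefficient of innovation p, coefficient of imitation q) are illustrative assumptions, not figures from the text.

```python
def bass_adoption(m, p, q, periods):
    """Simulate cumulative adopters under the Bass diffusion model.

    m: market potential (total eventual adopters)
    p: coefficient of innovation (adoption independent of current adopters)
    q: coefficient of imitation (adoption driven by current adopters)
    """
    adopters = 0.0
    history = []
    for _ in range(periods):
        # New adopters this period: innovators plus imitators, both
        # drawn from the pool of remaining non-adopters.
        new = (p + q * adopters / m) * (m - adopters)
        adopters += new
        history.append(round(adopters))
    return history

# Hypothetical community of 10,000 potential members, with p and q in
# the range commonly reported for the Bass model.
print(bass_adoption(m=10_000, p=0.03, q=0.38, periods=12))
```

The resulting S-curve mirrors the growth cycle described above: a slow start while the innovation term dominates, then accelerating growth as the imitation term q·N/m takes over.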
Online learning community
An online learning community is a form of online community whose sites are designed to educate. Colleges and universities may offer many of their classes online to their students, which allows each student to take the class at his or her own pace.
According to an article published in volume 21, issue 5 of the European Management Journal titled "Learning in Online Forums", researchers conducted a series of studies about online learning. They found that while good online learning is difficult to plan, it is quite conducive to learning. Online learning can bring together a diverse group of people, and although it is asynchronous, if the forum is set up using the best tools and strategies, it can be very effective.
Another study, published in volume 55, issue 1 of Computers and Education, found results supporting the findings of the article mentioned above. The researchers found that motivation, enjoyment, and team contributions enhanced students' learning outcomes and that students felt they learned well in this format. A study published in the same journal looks at how social networking can foster individual well-being and develop skills that can improve the learning experience.
These articles look at a variety of different types of online learning. They suggest that online learning can be quite productive and educational if created and maintained properly.
One feature of online communities is that they are not constrained by time, giving members the ability to move through periods of high and low activity. This dynamic nature maintains a freshness and variety that traditional methods of learning may not be able to provide.
It appears that online communities such as Wikipedia have become a source of professional learning. They are an active learning environment in which learners converse and inquire.
In a study exclusive to teachers in online communities, results showed that membership in online communities provided teachers with a rich source of professional learning that satisfied each member of the community.
Saurabh Tyagi describes benefits of online community learning which include:
No physical boundaries: Online communities do not limit their membership or exclude people based on where they live.
Supports in-class learning: Due to time constraints, discussion boards are more efficient for question & answer sessions than allowing time after lectures to ask questions.
Build a social and collaborative learning experience: People are best able to learn when they engage, communicate, and collaborate with each other. Online communities create an environment where users can collaborate through social interaction and shared experiences.
Self-governance: Anyone who can access the internet is self-empowered. The immediate access to information allows users to educate themselves.
These terms are taken from Edudemic, a site about teaching and learning. The article "How to Build Effective Online Learning Communities" provides background information about online communities as well as how to incorporate learning within an online community.
Video "gaming" and online interactions
One of the greatest attractions of online communities, and of the roles assigned within them, is the sense of connection users are able to build with other members and associates. Online communities are thus frequently referenced in discussions of the 'gaming' universe. The online video game industry has embraced the concepts of cooperative and diverse gaming in order to provide players with a sense of community or togetherness. Video games have long been seen as a solo endeavor – a way to escape reality and leave social interaction at the door. Yet online community networks or talk pages now allow forms of connection with other users. These connections offer aid within the games themselves, as well as overall collaboration and interaction in the network space. For example, a study conducted by Pontus Strimling and Seth Frey found that players would generate their own models of fair "loot" distribution through community interaction if they felt that the model provided by the game itself was insufficient.
The popularity of competitive online multiplayer games has even promoted informal social interaction through these recognized communities.
Problems with online gaming communities
As with other online communities, problems arise in how online communities are used within gaming culture, and with those who exploit these spaces for their own agendas. "Gaming culture" offers individuals personal experiences, development of creativity, and a sense of togetherness that can resemble formalized social communication techniques. On the other hand, these communities can also include toxicity, online disinhibition, and cyberbullying.
Toxicity: Toxicity in games usually takes the form of abusive or negative language or behavior.
Online disinhibition: The tendency within gaming communities to say things that would not normally be said in an in-person scenario. It gives the individual less restraint in culturally accepted interactions and typically takes the form of aggressiveness, often enabled by anonymity. Contributing factors include:
Dissociative anonymity
Invisibility
Power of status and authority
Cyberbullying: Cyberbullying occurs in varying degrees, but it is invariably abusive and harassing in nature.
Online health community
Online health communities are one example of online communities heavily used by internet users. A key benefit of online health communities is providing users with access to other users who have similar problems or experiences, which has a significant impact on the lives of their members. Through member participation, online health communities can offer patients opportunities for emotional support as well as access to experience-based information about a particular problem or possible treatment strategies. Some studies even show that users find experience-based information more relevant than information prescribed by professionals. Moreover, allowing patients to collaborate anonymously in some online health communities offers users a non-judgmental environment in which to share their problems, knowledge, and experiences. However, recent research has indicated that socioeconomic differences between patients may result in feelings of alienation or exclusion within these communities, despite attempts to make the environments inclusive.
Problems
Online communities are relatively new and unexplored areas. They promote a whole new community that prior to the Internet was not available. Although they can promote a vast array of positive qualities, such as relationships without regard to race, religion, gender, or geography, they can also lead to multiple problems.
Risk perception, an uncertainty about participating in an online community, is quite common, particularly with regard to the following types of potential loss:
performance,
financial,
opportunity/time,
safety,
social,
psychological.
Clay Shirky explains one of these problems using the image of two hula hoops. With the emergence of online communities there is a "real life" hula hoop and an "online life" one. These two hoops used to be completely separate, but now they have swung together and overlap. The problem with this overlap is that there is no distinction anymore between face-to-face interactions and virtual ones; they are one and the same. Shirky illustrates this by describing a meeting: a group of people will sit in a meeting, but they will all also be connected to a virtual world, using online communities such as a wiki.
A further problem is identity formation with the ambiguous real-virtual life mix. Identity formation in the real world consisted of "one body, one identity", but online communities allow a person to create "as many electronic personae" as they please. This can lead to identity deception. Claiming to be someone you are not can be problematic with other online community users and for yourself. Creating a false identity can cause confusion and ambivalence about which identity is true.
A lack of trust regarding personal or professional information is problematic with questions of identity or information reciprocity. Often, if information is given to another user of an online community, one expects equal information shared back. However, this may not be the case or the other user may use the information given in harmful ways. The construction of an individual's identity within an online community requires self-presentation. Self-presentation is the act of "writing the self into being", in which a person's identity is formed by what that person says, does, or shows. This also poses a potential problem as such self-representation is open for interpretation as well as misinterpretation. While an individual's online identity can be entirely constructed with a few of his/her own sentences, perceptions of this identity can be entirely misguided and incorrect.
Online communities present the problems of preoccupation, distraction, detachment, and desensitization to an individual, although online support groups now exist. Online communities do present potential risks, and users must be careful and remember that just because an online community feels safe does not mean it necessarily is.
Trolling and harassment
Cyberbullying, the "use of long-term aggressive, intentional, repetitive acts by one or more individuals, using electronic means, against an almost powerless victim", has increased in frequency alongside the continued growth of web communities, with an Open University study finding 38% of young people had experienced or witnessed cyberbullying. It has received significant media attention due to high-profile incidents such as the death of Amanda Todd, who before her death detailed her ordeal on YouTube.
A key feature of such bullying is that it allows victims to be harassed at all times, something not typically possible with physical bullying. This has forced governments and other organisations to change their typical approach to bullying, with the UK Department for Education now issuing advice to schools on how to deal with cyberbullying cases.
The most common problem with online communities tends to be online harassment, meaning threatening or offensive content aimed at known friends or strangers through online technology. Where such posting is done "for the lulz" (that is, for the fun of it), it is known as trolling. Sometimes trolling is done in order to harm others for the gratification of the person posting. The primary motivation for such posters, known in character theory as "snerts", is the sense of power and exposure it gives them.
Online harassment tends to affect adolescents the most due to their risk-taking behavior and decision-making processes. One notable example is that of Natasha MacBryde who was tormented by Sean Duffy, who was later prosecuted. In 2010, Alexis Pilkington, a 17-year-old New Yorker committed suicide. Trolls pounced on her tribute page posting insensitive and hurtful images of nooses and other suicidal symbolism. Four years prior to that an 18-year-old died in a car crash in California. Trolls took images of her disfigured body they found on the internet and used them to torture the girl's grieving parents.
Psychological research has shown that anonymity increases unethical behavior through what is called the online disinhibition effect. Many websites and online communities have attempted to combat trolling. There has not been a single effective method to discourage anonymity, and arguments exist claiming that removing Internet users' anonymity is an intrusion on their privacy and violates their right to free speech. Julie Zhou, writing for the New York Times, comments that "There's no way to truly rid the Internet of anonymity. After all, names and email addresses can be faked. And in any case many commenters write things that are rude or inflammatory under their real names". Thus, some trolls do not even bother to hide their actions and take pride in their behavior. The rate of reported online harassment has been increasing, with a 50% increase in accounts of youth online harassment from 2000 to 2005.
Another form of harassment prevalent online is called flaming. According to a study conducted by Peter J. Moor, flaming is defined as displaying hostility by insulting, swearing, or using otherwise offensive language. Flaming can be done in either a group-style format (the comments section on YouTube) or in a one-on-one format (private messaging on Facebook). Several studies have shown that flaming is more apparent in computer-mediated conversation than in face-to-face interaction. For example, a study conducted by Kiesler et al. found that people who met online judged each other more harshly than those who met face to face. The study goes on to say that the people who communicated by computer "felt and acted as though the setting was more impersonal, and their behavior was more uninhibited. These findings suggest that computer-mediated communication ... elicits asocial or unregulated behavior".
Unregulated communities are established when online users communicate on a site without mutual terms of usage; there is no regulator. Online interest groups and anonymous blogs are examples of unregulated communities.
Cyberbullying is also prominent online. Cyberbullying is defined as willful and repeated harm inflicted towards another through information technology mediums. Cyberbullying victimization has ascended to the forefront of the public agenda after a number of news stories came out on the topic. For example, Rutgers freshman Tyler Clementi committed suicide in 2010 after his roommate secretly filmed him in an intimate encounter and then streamed the video over the Internet. Numerous states, such as New Jersey, have created and passed laws that do not allow any sort of harassment on, near, or off school grounds that disrupts or interferes with the operation of the school or the rights of other students. In general, sexual and gender-based harassment online has been deemed a significant problem.
Trolling and cyber bullying in online communities are very difficult to stop for several reasons:
Community members do not wish to violate libertarian ideologies that state everyone has the right to speak.
The distributed nature of online communities makes it difficult for members to come to an agreement.
Deciding who should moderate, and how, creates difficulties for community management.
Hazing
A lesser known problem is hazing within online communities. Members of an elite online community use hazing to display their power, produce inequality, and instill loyalty into newcomers. While online hazing does not inflict physical duress, "the status values of domination and subordination are just as effectively transmitted". Elite members of the in-group may haze by employing derogatory terms to refer to newcomers, using deception or playing mind games, or participating in intimidation, among other activities.
"[T]hrough hazing, established members tell newcomers that they must be able to tolerate a certain level of aggressiveness, grossness, and obnoxiousness in order to fit in and be accepted by the BlueSky community".
Privacy
Online communities like social networking websites have a very unclear distinction between private and public information. For most social networks, users have to give personal information to add to their profiles. Usually, users can control what type of information other people in the online community can access based on the user's familiarity with those people or the user's level of comfort. These limitations are known as "privacy settings". Privacy settings raise the question of how privacy settings and terms of service affect the expectation of privacy in social media. After all, the purpose of an online community is to share a common space with one another. Furthermore, it is hard to take legal action when a user feels that his or her privacy has been invaded because he or she technically knew what the online community entailed. Creator of the social networking site Facebook, Mark Zuckerberg, noticed a change in users' behavior from when he first initiated Facebook. It seemed that "society's willingness to share has created an environment where privacy concerns are less important to users of social networks today than they were when social networking began". However, even though a user might keep his or her personal information private, his or her activity is open to the whole web to access. When a user posts information to a site or comments or responds to information posted by others, social networking sites create a tracking record of the user's activity. Platforms such as Google and Facebook collect massive amounts of this user data through their surveillance infrastructures.
Internet privacy relates to the transmission and storage of a person's data and their right to anonymity whilst online, with the UN in 2013 adopting online privacy as a human right by a unanimous vote. Many websites allow users to sign up with a username which need not be their actual name, which allows a level of anonymity; in some cases, such as on the infamous imageboard 4chan, users do not need an account to engage with discussions. However, in these cases, depending on the detail of the information posted about a person, it can still be possible to work out the user's identity.
Even when a person takes measures to protect their anonymity and privacy, revelations by Edward Snowden, a former contractor at the Central Intelligence Agency, about mass surveillance programs conducted by the US intelligence services appear to show that individuals' privacy is not always respected. The programs involved the mass collection of data on both domestic and international users of popular websites, including Facebook and YouTube, as well as the collection of information straight from fiber cables without consent. Facebook founder Mark Zuckerberg publicly stated that the company had not been informed of any such programs and only handed over individual users' data when required by law, implying that, if the allegations are true, the data harvested had been collected without the company's consent.
The growing popularity of social networks, where using one's real name is the norm, also brings new challenges: one survey of 2,303 managers found that 37% investigated candidates' social media activity during the hiring process, and a study showed that 1 in 10 job application rejections for those aged 16 to 34 could be due to social media checks.
Reliability of information
Web communities can be an easy and useful tool for accessing information. However, the information they contain, as well as users' credentials, cannot always be trusted, since the internet provides a relatively anonymous medium through which some fraudulently claim anything from their qualifications or where they live to, in rare cases, being a specific person. Malicious fake accounts created with the aim of defrauding victims out of money have become more high-profile, with four men sentenced to between 46 weeks and 8 years for defrauding 12 women out of £250,000 using fake accounts on a dating website. In relation to accuracy, one survey based on Wikipedia that evaluated 50 articles found that 24% contained inaccuracies; while in most cases the consequence might just be the spread of misinformation, in areas such as health the consequences can be far more damaging, leading the U.S. Food and Drug Administration to provide help on evaluating health information on the web.
Imbalance
The 1% rule states that, within an online community, as a rule of thumb only 1% of users actively contribute to creating content. Variations also exist, such as the 1-9-90 rule (1% post and create; 9% share, like, and comment; 90% view only), which takes editing into account. This raises problems for online communities: most users are interested only in the information such a community might contain rather than in actively contributing, which can lead to staleness of information and community decline. This has led communities that rely on users editing content to try to convert readers into active contributors, and to work on the retention of existing members, through projects such as the Wikimedia Account Creation Improvement Project.
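As a worked illustration of the 1-9-90 rule of thumb, the sketch below applies the split to a hypothetical community of 10,000 users (the community size is invented for the example).

```python
def split_1_9_90(total_users):
    """Apply the 1-9-90 rule of thumb to a community size."""
    creators = round(total_users * 0.01)  # post and create content
    engagers = round(total_users * 0.09)  # share, like, comment
    viewers = round(total_users * 0.90)   # view only
    return creators, engagers, viewers

creators, engagers, viewers = split_1_9_90(10_000)
print(f"creators={creators}, engagers={engagers}, viewers={viewers}")
# creators=100, engagers=900, viewers=9000
```

Under this rule, even a large community depends on a very small absolute number of content creators, which is why retention projects target that 1%.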
Legal issues
In the US, two of the most important laws dealing with legal issues of online communities, especially social networking sites are Section 512c of the Digital Millennium Copyright Act and Section 230 of the Communications Decency Act.
Section 512c removes liability for copyright infringement from sites that let users post content, so long as there is a way by which the copyright owner can request the removal of infringing content. To qualify for this safe harbor, the website must not receive a financial benefit directly attributable to the infringing activity.
Section 230 of the Communications Decency Act provides protection from liability arising from the publication of content provided by another party. Common issues include defamation, but many courts have expanded the protection to cover other claims as well.
Online communities of various kinds (social networking sites, blogs, media sharing sites, etc.) are posing new challenges for all levels of law enforcement in combating many kinds of crimes including harassment, identity theft, copyright infringement, etc.
Copyright law is being challenged and debated with the shift in how individuals now disseminate their intellectual property. Individuals come together via online communities in collaborative efforts to create. Many describe current copyright law as being ill-equipped to manage the interests of individuals or groups involved in these collaborative efforts. Some say that these laws may even discourage this kind of production.
Laws governing online behavior pose another challenge to lawmakers in that they must work to enact laws that protect the public without infringing upon their rights to free speech. Perhaps the most talked about issue of this sort is that of cyberbullying. Some scholars call for collaborative efforts between parents, schools, lawmakers, and law enforcement to curtail cyberbullying.
Laws must continually adapt to the ever-changing landscape of social media in all its forms; some legal scholars contend that lawmakers need to take an interdisciplinary approach to creating effective policy whether it is regulatory, for public safety, or otherwise. Experts in the social sciences can shed light on new trends that emerge in the usage of social media by different segments of society (including youths). Armed with this data, lawmakers can write and pass legislation that protect and empower various online community members.
See also
Clan (computer gaming)
Commons-based peer production
Digital altruism
Immersion (virtual reality)
Internet activism
Internet influences on communities
Internet trolling
Learner generated context
Mass collaboration
Network of practice
Online community manager
Online deliberation
Online ethnography
Online research community
Professional network service
Social media
Social web
Support groups
Tribe (internet)
Video game culture
References
Shuie, Yih-Chearng. "Exploring and Mitigating Social Loafing in Online Communities". Computers and Behavior. v.26.4, July 2010. pp. 768–777
Matzat, Uwe. "Reducing Problems of Sociability in Online Communities: Integrating Online Communication with Offline Interaction". American Behavioral Scientist. 2010. pp. 1170–1193
Lwin, May O. "Stop Bugging Me: An Examination of Adolescents' Protection Behavior Against Online Harassment" Journal of Adolescence. 2011. pp. 1–11
Further reading
Barzilai, G. (2003). Communities and Law: Politics and Cultures of Legal Identities. Ann Arbor: The University of Michigan Press.
Else, Liz & Turkle, Sherry. "Living online: I'll have to ask my friends", New Scientist, issue 2569, 20 September 2006. (interview)
Hafner, K. 2001. The WELL: A Story of Love, Death and Real Life in the Seminal Online Community Carroll & Graf Publishers
Gurak, Laura J. 1997. Persuasion and Privacy in Cyberspace: the Online Protests over Lotus Marketplace and the Clipper Chip. New Haven: Yale University Press.
Hagel, J. & Armstrong, A. (1997). Net Gain: Expanding Markets through Virtual Communities. Boston: Harvard Business School Press
Kim, A.J. (2000). Community Building on the Web: Secret Strategies for Successful Online Communities. London: Addison Wesley
Leimeister, J. M.; Sidiras, P.; Krcmar, H. (2006): Exploring Success Factors of Virtual Communities: The Perspectives of Members and Operators. In: Journal of Organizational Computing & Electronic Commerce (JoCEC), 16 (3&4), 277–298
Preece, J. (2000). Online Communities: Supporting Sociability, Designing Usability. Chichester: John Wiley & Sons Ltd.
Davis Powell, Connie. ""You Already Have Zero Privacy. Get Over It!" Would Warren and Brandeis Argue for Privacy for Social Networking?" Pace Law Review 31.1 (2011): 146–81.
Salkin, Patricia E. "Social Networking and Land Use Planning and Regulation: Practical Benefits, Pitfalls, and Ethical Considerations." Pace Law Review 31.1 (2011): 54–94.
Wilson, Samuel M.; Peterson, Leighton C. (2002). "The Anthropology of Online Communities". Annual Review of Anthropology 31(1): 449–467.
Virtual reality
Community building
Social information processing
Developmental systems theory
Developmental systems theory (DST) is an overarching theoretical perspective on biological development, heredity, and evolution. It emphasizes the shared contributions of genes, environment, and epigenetic factors on developmental processes. DST, unlike conventional scientific theories, is not directly used to help make predictions for testing experimental results; instead, it is seen as a collection of philosophical, psychological, and scientific models of development and evolution. As a whole, these models argue for the inadequacy of the modern evolutionary synthesis on the roles of genes and natural selection as the principal explanation of living structures. Developmental systems theory embraces a large range of positions that expand biological explanations of organismal development and hold modern evolutionary theory as a misconception of the nature of living processes.
Overview
All versions of developmental systems theory espouse the view that:
All biological processes (including both evolution and development) operate by continually assembling new structures.
Each such structure transcends the structures from which it arose and has its own systematic characteristics, information, functions and laws.
Conversely, each such structure is ultimately irreducible to any lower (or higher) level of structure, and can be described and explained only on its own terms.
Furthermore, the major processes through which life as a whole operates, including evolution, heredity and the development of particular organisms, can only be accounted for by incorporating many more layers of structure and process than the conventional concepts of ‘gene’ and ‘environment’ normally allow for.
In other words, although it does not claim that all structures are equal, developmental systems theory is fundamentally opposed to reductionism of all kinds. In short, developmental systems theory intends to formulate a perspective which does not presume the causal (or ontological) priority of any particular entity and thereby maintains an explanatory openness on all empirical fronts. For example, there is vigorous resistance to the widespread assumptions that one can legitimately speak of genes ‘for’ specific phenotypic characters or that adaptation consists of evolution ‘shaping’ the more or less passive species, as opposed to adaptation consisting of organisms actively selecting, defining, shaping and often creating their niches.
Developmental systems theory: Topics
Six Themes of DST
Joint Determination by Multiple Causes: Development is a product of multiple interacting sources.
Context Sensitivity and Contingency: Development depends on the current state of the organism.
Extended Inheritance: An organism inherits resources from the environment in addition to genes.
Development as a process of construction: The organism helps shape its own environment, such as the way a beaver builds a dam to raise the water level to build a lodge.
Distributed Control: Idea that no single source of influence has central control over an organism's development.
Evolution As Construction: The evolution of an entire developmental system, including whole ecosystems of which given organisms are parts, not just the changes of a particular being or population.
A computing metaphor
To adopt a computing metaphor, the reductionists (whom developmental systems theory opposes) assume that causal factors can be divided into ‘processes’ and ‘data’, as in the Harvard computer architecture. Data (inputs, resources, content, and so on) is required by all processes, and must often fall within certain limits if the process in question is to have its ‘normal’ outcome. However, the data alone is helpless to create this outcome, while the process may be ‘satisfied’ with a considerable range of alternative data.
Developmental systems theory, by contrast, assumes that the process/data distinction is at best misleading and at worst completely false, and that while it may be helpful for very specific pragmatic or theoretical reasons to treat a structure now as a process and now as a datum, there is always a risk (to which reductionists routinely succumb) that this methodological convenience will be promoted into an ontological conclusion. In fact, for the proponents of DST, either all structures are both process and data, depending on context, or even more radically, no structure is either.
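To make the metaphor concrete, the toy sketch below (an illustration invented here, not drawn from the DST literature) shows the reductionist picture: a fixed 'process' yields its 'normal' outcome across a considerable range of alternative 'data', while the data alone produces nothing.

```python
def develop(genome):
    """A fixed 'process' in the reductionist picture: any 'data' within
    certain limits yields the normal outcome."""
    if 0.5 <= sum(genome) <= 1.5:  # the limits the data must fall within
        return "normal outcome"
    return "abnormal outcome"

# A considerable range of alternative data 'satisfies' the process...
print(develop([0.2, 0.4]))   # normal outcome
print(develop([0.6, 0.7]))   # normal outcome
print(develop([2.0, 1.0]))   # abnormal outcome: data outside the limits
# ...but the data alone is helpless to create the outcome; only the
# process/data pair does. DST denies that this split marks a real joint
# in nature: in context, either item could be treated as process or data.
```

DST's point is precisely that treating develop as pure process and genome as pure data is a methodological convenience, not an ontological fact.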
Fundamental asymmetry
For reductionists there is a fundamental asymmetry between different causal factors, whereas for DST such asymmetries can only be justified by specific purposes; its proponents argue that many of the (generally unspoken) purposes to which such (generally exaggerated) asymmetries have been put are scientifically illegitimate. Thus, for developmental systems theory, many of the most widely applied, asymmetric and entirely legitimate distinctions biologists draw (between, say, genetic factors that create potential and environmental factors that select outcomes, or genetic factors of determination and environmental factors of realisation) obtain their legitimacy from the conceptual clarity and specificity with which they are applied, not from their having tapped a profound and irreducible ontological truth about biological causation. One problem might be solved by reversing the direction of causation correctly identified in another. This parity of treatment is especially important when comparing the evolutionary and developmental explanations for one and the same character of an organism.
DST approach
One upshot of this approach is that developmental systems theory also argues that what is inherited from generation to generation is a good deal more than simply genes (or even the other items, such as the fertilised zygote, that are also sometimes conceded). As a result, much of the conceptual framework that justifies ‘selfish gene’ models is regarded by developmental systems theory as not merely weak but actually false. Not only are major elements of the environment built and inherited as materially as any gene but active modifications to the environment by the organism (for example, a termite mound or a beaver’s dam) demonstrably become major environmental factors to which future adaptation is addressed. Thus, once termites have begun to build their monumental nests, it is the demands of living in those very nests to which future generations of termite must adapt.
This inheritance may take many forms and operate on many scales, with a multiplicity of systems of inheritance complementing the genes. From position and maternal effects on gene expression to epigenetic inheritance to the active construction and intergenerational transmission of enduring niches, developmental systems theory argues that not only inheritance but evolution as a whole can be understood only by taking into account a far wider range of ‘reproducers’ or ‘inheritance systems’ – genetic, epigenetic, behavioural and symbolic – than neo-Darwinism’s ‘atomic’ genes and gene-like ‘replicators’. DST regards every level of biological structure as susceptible to influence from all the structures by which they are surrounded, be it from above, below, or any other direction – a proposition that throws into question some of (popular and professional) biology’s most central and celebrated claims, not least the ‘central dogma’ of Mendelian genetics, any direct determination of phenotype by genotype, and the very notion that any aspect of biological (or psychological, or any other higher form) activity or experience is capable of direct or exhaustive genetic or evolutionary ‘explanation’.
Developmental systems theory is plainly radically incompatible with both neo-Darwinism and information processing theory. Whereas neo-Darwinism defines evolution in terms of changes in gene distribution, the possibility that an evolutionarily significant change may arise and be sustained without any directly corresponding change in gene frequencies is an elementary assumption of developmental systems theory, just as neo-Darwinism’s ‘explanation’ of phenomena in terms of reproductive fitness is regarded as fundamentally shallow. Even the widespread mechanistic equation of ‘gene’ with a specific DNA sequence has been thrown into question, as have the analogous interpretations of evolution and adaptation.
Likewise, the wholly generic, functional and anti-developmental models offered by information processing theory are comprehensively challenged by DST’s evidence that nothing is explained without an explicit structural and developmental analysis on the appropriate levels. As a result, what qualifies as ‘information’ depends wholly on the content and context out of which that information arises, within which it is translated and to which it is applied.
Criticism
Philosopher Neven Sesardić, while not dismissive of developmental systems theory, argues that its proponents forget that the role of interaction between levels is ultimately an empirical issue, which cannot be settled by a priori speculation; Sesardić observes that while the emergence of lung cancer is a highly complicated process involving the combined action of many factors and interactions, it is not unreasonable to believe that smoking has an effect on developing lung cancer. Therefore, though developmental processes are highly interactive, context dependent, and extremely complex, it is incorrect to conclude that main effects of heredity and environment are unlikely to be found in the "messiness". Sesardić argues that the idea that the effect of one factor always depends on what is happening in other factors is an empirical claim, as well as a false one; for example, the bacterium Bacillus thuringiensis produces a protein that is toxic to caterpillars. Genes from this bacterium have been placed into plants vulnerable to caterpillars, and the insects die when they eat part of the plant, as they consume the toxic protein. Thus, developmental approaches must be assessed on a case-by-case basis, and in Sesardić's view, DST does not offer much if only posed in general terms. Hereditarian psychologist Linda Gottfredson differentiates the fallacy of so-called "interactionism" from the technical use of gene-environment interaction to denote a non-additive environmental effect conditioned upon genotype. "Interactionism's" over-generalization cannot render attempts to identify genetic and environmental contributions meaningless. Where behavioural genetics attempts to determine the portions of variation accounted for by genetics, environmental-developmentalist approaches like DST attempt to determine the typical course of human development and, in Gottfredson's view, erroneously conclude that the common theme is readily changed.
Another of Sesardić's arguments counters the DST claim that it is impossible to determine the contributions of genetic versus environmental influence to a trait. If genes and environment are inseparable, as DST holds, it necessarily follows that a trait cannot be causally attributed to the environment either; yet DST, while critical of genetic heritability, advocates developmentalist research into environmental effects, a logical inconsistency. Barnes et al. made similar criticisms, observing that the innate human capacity for language (deeply genetic) does not determine the specific language spoken (a contextually environmental effect). It is then, in principle, possible to separate the effects of genes and environment. Similarly, Steven Pinker argues that if genes and environment could not actually be separated, then speakers would have a deterministic genetic disposition to learn a specific native language upon exposure. Though seemingly consistent with the idea of gene-environment interaction, Pinker argues this is nonetheless an absurd position, since empirical evidence shows ancestry has no effect on language acquisition: environmental effects are often separable from genetic ones.
Related theories
Developmental systems theory is not a narrowly defined collection of ideas, and the boundaries with neighbouring models are porous. Notable related ideas (with key texts) include:
The Baldwin effect
Evolutionary developmental biology
Neural Darwinism
Probabilistic epigenesis
Relational developmental systems
See also
Systems theory
Complex adaptive system
Developmental psychobiology
The Dialectical Biologist - a 1985 book by Richard Levins and Richard Lewontin which describes a related approach.
Living systems
References
Bibliography
Dawkins, R. (1976). The Selfish Gene. New York: Oxford University Press.
Dawkins, R. (1982). The Extended Phenotype. Oxford: Oxford University Press.
Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Durham, N.C.: Duke University Press.
Edelman, G.M. (1987). Neural Darwinism: Theory of Neuronal Group Selection. New York: Basic Books.
Edelman, G.M. and Tononi, G. (2001). Consciousness. How Mind Becomes Imagination. London: Penguin.
Goodwin, B.C. (1995). How the Leopard Changed its Spots. London: Orion.
Goodwin, B.C. and Saunders, P. (1992). Theoretical Biology. Epigenetic and Evolutionary Order from Complex Systems. Baltimore: Johns Hopkins University Press.
Jablonka, E., and Lamb, M.J. (1995). Epigenetic Inheritance and Evolution. The Lamarckian Dimension. London: Oxford University Press.
Kauffman, S.A. (1993). The Origins of Order: Self-Organization and Selection in Evolution. Oxford: Oxford University Press.
Levins, R. and Lewontin, R. (1985). The Dialectical Biologist. London: Harvard University Press.
Neumann-Held, E.M. (1999). The gene is dead – long live the gene. Conceptualizing genes the constructionist way. In P. Koslowski (ed.). Sociobiology and Bioeconomics: The Theory of Evolution in Economic and Biological Thinking, pp. 105–137. Berlin: Springer.
Waddington, C.H. (1957). The Strategy of the Genes. London: Allen and Unwin.
Further reading
Depew, D.J. and Weber, B.H. (1995). Darwinism Evolving. System Dynamics and the Genealogy of Natural Selection. Cambridge, Massachusetts: MIT Press.
Eigen, M. (1992). Steps Towards Life. Oxford: Oxford University Press.
Gray, R.D. (2000). Selfish genes or developmental systems? In Singh, R.S., Krimbas, C.B., Paul, D.B., and Beatty, J. (2000). Thinking about Evolution: Historical, Philosophical, and Political Perspectives. Cambridge University Press: Cambridge. (184-207).
Koestler, A., and Smythies, J.R. (1969). Beyond Reductionism. London: Hutchinson.
Lehrman, D.S. (1953). A critique of Konrad Lorenz’s theory of instinctive behaviour. Quarterly Review of Biology 28: 337-363.
Thelen, E. and Smith, L.B. (1994). A Dynamic Systems Approach to the Development of Cognition and Action. Cambridge, Massachusetts: MIT Press.
External links
William Bechtel, Developmental Systems Theory and Beyond presentation, winter 2006.
Biological systems
Systems theory
Evolutionary biology
Lived experience
In qualitative phenomenological research, lived experience refers to the first-hand involvement or direct experiences and choices of a given person, and the knowledge that they gain from it, as opposed to the knowledge a person gains from second-hand or mediated sources. It is a category of qualitative research together with those that focus on society and culture and those that focus on language and communication.
In the philosophy of Wilhelm Dilthey, the human sciences are based on lived experience, which makes them fundamentally different from the natural sciences, which are considered to be based on scientific experiences. The concept can also be approached from the view that since every experience has both objective and subjective components, it is important for a researcher to understand all aspects of it.
In phenomenological research, lived experiences are the main object of study, but the goal of such research is not to understand individuals' lived experiences as facts, but to determine the understandable meaning of such experiences. In addition, lived experience is not about reflecting on an experience while living through it but is recollective, with a given experience being reflected on after it has passed or been lived through.
The term dates back to the 19th century, but its use has increased greatly in recent decades.
See also
References
External links
Phenomenological methodology
Qualitative research
Post-traumatic growth
In psychology, posttraumatic growth (PTG) is positive psychological change experienced as a result of struggling with highly challenging, highly stressful life circumstances. These circumstances represent significant challenges to the adaptive resources of the individual, and pose significant challenges to the individual's way of understanding the world and their place in it. Posttraumatic growth involves "life-changing" psychological shifts in thinking and relating to the world and the self that contribute to a personal process of change that is deeply meaningful.
People who have experienced post-traumatic growth often report changes within the following five factors: appreciation of life; relating to others; personal strength; new possibilities; and spiritual, existential or philosophical change.
Global Context & History
The general understanding that suffering and distress can potentially yield positive change is thousands of years old. For example, some of the early ideas and writing of the ancient Hebrews, Greeks, and early Christians, as well as some of the teachings of Hinduism, Buddhism, Islam and the Baháʼí Faith contain elements of the potentially transformative power of suffering. Attempts to understand and discover the meaning of human suffering represent a central theme of much philosophical inquiry and appear in the works of novelists, dramatists and poets.
Traditional psychology's equivalent to thriving is resilience, which is reaching the previous level of functioning before a trauma, stressor, or challenge. The difference between resilience and thriving is the recovery point – thriving goes above and beyond resilience, and involves finding benefits within challenges.
The term "posttraumatic growth" was coined by psychologists Richard Tedeschi and Lawrence Calhoun at the University of North Carolina at Charlotte. According to Tedeschi, as many as 89% of survivors report at least one aspect of posttraumatic growth, such as a renewed appreciation for life.
Variants of the idea have included Crystal Park's proposed stress-related growth model, which highlighted the derived sense of meaning in the context of adjusting to challenging and stressful situations, and Joseph and Linley's proposed adversarial growth model, which linked growth with psychological wellbeing. According to the adversarial growth model, whenever an individual is experiencing a challenging situation, they can either integrate the traumatic experience into their current belief system and worldviews or they can modify their beliefs based on their current experiences. If the individual positively accommodates the trauma-related information and assimilates it into prior beliefs, psychological growth can occur following adversity.
The Development of Post-Traumatic Growth
The Relationship Between Trauma, PTG, and Other Outcomes
Psychological trauma is an emotional response caused by severe distressing events that are outside the normal range of human experiences. While the idea that positive change may occur following trauma may seem paradoxical, it is common and well documented. However, not everyone who experiences a traumatic event will necessarily develop post-traumatic growth. This is because growth does not occur as a direct result of trauma; rather, it is the individual's struggle with the new reality in the aftermath of trauma that is crucial in determining the extent to which post-traumatic growth occurs.
While PTG often leads individuals to live in ways that are fulfilling and meaningful, the presence of PTG and distress are not mutually exclusive. Experiencing trauma is typically associated with distress and loss, and PTG does not change this. PTG and negative trauma-related outcomes (e.g. PTSD) often coexist. Encouragingly, reports of growth experiences in the aftermath of traumatic events far outnumber reports of psychiatric disorders.
Creating Post Traumatic Growth
Posttraumatic growth occurs through attempts to adapt to highly negative sets of circumstances, such as major life crises, that can engender high levels of psychological distress and typically produce unpleasant psychological reactions. Such experiences often alter or renew one's core relationships or concepts, leading to PTG.
A Model of PTG
Calhoun and Tedeschi (2006) outline their updated model of posttraumatic growth in Handbook of Post-traumatic Growth: Research and Practice. Most importantly, this model includes:
Characteristics of the Person and of the Challenging Circumstances
Management of Emotional Distress
Rumination
Self-Disclosure
Sociocultural Influences
Narrative Development
Life Wisdom
Promotive Factors
Various factors have been identified as associated with the development of PTG. In 2011, Iversen, Christiansen, and Elklit suggested that predictors of growth have different effects on PTG at the micro, meso, and macro levels, and that a positive predictor of growth on one level can be a negative predictor of growth on another level. This might explain some of the inconsistent research results within the area.
Trauma Types: Characteristics of the traumatic event may contribute to the development or inhibition of PTG. For example, for PTG to come about, the severity of the traumatic experience must be enough to threaten one's preexisting understanding of the world or one's personal narrative. However, extremely severe trauma exposure may overwhelm one's ability to comprehend and grow from the experience. Experiencing multiple sources of trauma is also considered promotive of PTG. While gender roles did not reliably predict PTG, they are indicative of the type of trauma that an individual experiences. Women tend to experience victimization on a more individual and interpersonal level (e.g. sexual victimization) while men tend to experience more systemic and collective traumas (e.g. military and combat). Given that group dynamics appear to play a predictive role in post-traumatic growth, it can be argued that the type of exposure may indirectly predict growth in men (Lilly 2012).
Responding to the Traumatic Experience: The different ways in which a person may process or engage after a traumatic experience may influence whether PTG comes about. The presence of rumination, sharing negative emotions, positive coping strategies (e.g. spirituality), event centrality, resilience, and growth actions are associated with increased PTG.
Many individuals ruminate extensively about a traumatic experience after it has occurred. In this context, rumination is not necessarily negative and can mean the same thing as cognitive engagement. When this occurs, the individual is investing mental resources into understanding and making sense of their experience. People typically engage in rumination to comprehend and explain their experience (Why? How?) and to discover how their experience factors into their perceptions and plans (What does this mean? What now?). While neither is entirely bad, deliberate rather than intrusive rumination can be the most effective at producing growth.
The use of different coping strategies to adjust to a stressor may also influence the development of PTG. As Richard G. Tedeschi and other post-traumatic growth researchers have found, the ability to accept situations that cannot be changed is crucial for adapting to traumatic life events. They call it "acceptance coping", and have determined that coming to terms with reality is a significant predictor of post-traumatic growth. It is also alleged, though currently under further investigation, that the opportunity for emotional disclosure can lead to post-traumatic growth, though it did not significantly reduce post-traumatic stress symptomatology.
The Individual's Characteristics: Some personality traits have been found to be associated with increased PTG. These include openness, agreeableness, altruistic behaviors, extraversion, conscientiousness, sense of coherence (SOC), sense of purpose, hopefulness, and low neuroticism. Despite being otherwise undesirable, narcissism is also associated with PTG. These traits may increase an individual's capacity to adapt to traumas, leading to growth.
Social Support: Social support has been found to be a mediator of PTG. Not only are high levels of pre-exposure social support associated with growth, but there is some neurobiological evidence to support the idea that support will modulate a pathological response to stress in the hypothalamic-pituitary-adrenocortical (HPA) pathway in the brain (Ozbay 2007). It also benefits a person to have supportive others who can aid in posttraumatic growth by providing a way to craft narratives about the changes that have occurred, and by offering perspectives that can be integrated into schema change. These relationships help develop narratives; narratives of trauma and survival are always important in posttraumatic growth because they force survivors to confront questions of meaning and how answers to those questions can be reconstructed.
Religion and Spirituality: Spirituality has been shown to highly correlate with post-traumatic growth and in fact, many of the most deeply spiritual beliefs are a result of trauma exposure.
Other Variables:
Age: Post-traumatic growth has been studied in children to a lesser extent. A review by Meyerson and colleagues found various relations between social and psychological factors and posttraumatic growth in children and adolescents, but concluded that fundamental questions about its value and function remain.
Interdisciplinary Connections
Personality Psychology & PTG
Historically, personality traits have been depicted as stable after the age of 30. Since 1994, research findings have suggested that personality traits can change in response to life transition events during middle and late adulthood. Life transition events may be related to work, relationships, or health. Moderate amounts of stress were associated with improvements in the traits of mastery and toughness. Individuals experiencing moderate amounts of stress were found to be more confident about their abilities and had a better sense of control over their lives. Further, moderate amounts of stress were also associated with better resilience, which can be defined as successful recovery to baseline following stress. An individual who experienced moderate amounts of stressful events was more likely to develop coping skills, seek support from their environment, and experience more confidence in their ability to overcome adversity.
Post-traumatic growth & Personality psychology
Experiencing a traumatic event can have a transformational role in personality among certain individuals and facilitate growth. For example, individuals who have experienced trauma have been shown to exhibit greater optimism, positive affect, and satisfaction with social support, as well as increases in the number of socially supportive resources. Similarly, research reveals personality changes among spouses of terminal cancer patients, suggesting that such traumatic life transitions facilitated increases in interpersonal orientation, prosocial behaviors, and dependability scores.
The outcome of traumatic events can be negatively impacted by factors occurring during and after the trauma, potentially increasing the risk of developing posttraumatic stress disorder, or other mental health difficulties.
Further, characteristics of the trauma and personality dynamics of the individual experiencing the trauma each independently contributed to posttraumatic growth. If the amounts of stress are too low or too overwhelming, a person cannot cope with the situation. Personality dynamics can either facilitate or impede posttraumatic growth, regardless of the impact of traumatic events.
Mixed Findings in Personality Psychology
Research of posttraumatic growth is emerging in the field of personality psychology, with mixed findings. Several researchers examined posttraumatic growth and its associations with the big five personality model. Posttraumatic growth was found to be associated with greater agreeableness, openness, and extraversion. Agreeableness relates to interpersonal behaviors which include trust, altruism, compliance, honesty, and modesty. Individuals who are agreeable are more likely to seek support when needed and to receive it from others. Higher scores on the agreeableness trait can facilitate the development of posttraumatic growth.
Individuals who score high on openness scales are more likely to be curious, open to new experiences, and emotionally responsive to their surroundings. It is hypothesized that following a traumatic event, individuals who score high on openness would more readily reconsider beliefs and values that may have been altered. Openness to experience is thus key for facilitating posttraumatic growth. Individuals who score high on extraversion were more likely to adopt problem-solving strategies and cognitive restructuring, and to seek more support from others. Individuals who score high on extraversion use coping strategies that enable posttraumatic growth. Research among veterans and among children of prisoners of war suggested that openness and extraversion contributed to posttraumatic growth.
Research among community samples suggested that openness, agreeableness, and conscientiousness contributed to posttraumatic growth. Individuals who score high on conscientiousness tend to be better at self-regulating their internal experience, have better impulse control, and are more likely to seek achievements across various domains. The conscientiousness trait has been associated with better problem-solving and cognitive restructuring. As such, individuals who are conscientious are more likely to better adjust to stressors and exhibit posttraumatic growth.
Other research among bereaved caregivers and among undergraduates indicated that posttraumatic growth was associated with extraversion, agreeableness, and conscientiousness. As such, the findings linking the Big Five personality traits with posttraumatic growth are mixed.
Personality Dynamics & Trauma Types
Recent research is examining the influence of trauma types and personality dynamics on posttraumatic growth. Individuals who aspire to standards and orderliness are more likely to develop posttraumatic growth and better overall mental health. It is hypothesized that such individuals can better process the meaning of hardships as they experience moderate amounts of stress. This tendency can facilitate positive personal growth. On the other hand, it was found that individuals who have trouble regulating themselves are less likely to develop posttraumatic growth and more likely to develop trauma-spectrum disorders and mood disorders. This is in line with past research suggesting that individuals who scored higher on self-discrepancy were more likely to score higher on neuroticism and exhibit poor coping. Neuroticism relates to an individual's tendency to respond with negative emotions to threat, frustration, or loss. As such, individuals with high neuroticism and self-discrepancy are less likely to develop posttraumatic growth. Research has also highlighted the important role that collective processing of emotional experiences plays in posttraumatic growth. Those who are more capable of engaging with the emotional experiences brought on by crisis and trauma, and of making meaning of them, are more likely to show increased resilience and community engagement following the disaster. Furthermore, collective processing of these emotional experiences leads to greater individual growth and to collective solidarity and belongingness.
Personality Characteristics
Two personality characteristics that may affect the likelihood of making positive use of the aftermath of traumatic events are extraversion and openness to experience. Also, optimists may be better able to focus attention and resources on the most important matters and to disengage from uncontrollable or unsolvable problems. The ability to grieve and gradually accept trauma could also increase the likelihood of growth.
Individual differences in coping strategies set some people on a maladaptive spiral, whereas others proceed on an adaptive spiral. With this in mind, some early success in coping could be a precursor to posttraumatic growth. A person's level of confidence could also play a role in their ability to persist toward growth or, out of lack of confidence, to give up.
Positive Psychology & PTG
Posttraumatic growth can be seen as a form of positive psychology. In the 1990s, the field of psychology began a movement towards understanding positive psychological outcomes after trauma. Researchers initially referred to this phenomenon in a number of different ways: "positive life changes", "growing in the aftermath of suffering", and "positive adaptation to trauma". It was not until Tedeschi and Calhoun created the Posttraumatic Growth Inventory (PTGI) in 1996 that the term posttraumatic growth (PTG) was coined. Around the same time, a new area of strengths-based psychology emerged.
Positive psychology involves studying positive mental processes with the aim of understanding positive psychological outcomes and "healthy" individuals. This framework was intended as an answer to psychology's traditional focus on mental illness. The core ideals of positive psychology include, but are not limited to:
Positive personality traits (optimism, subjective well-being, happiness, self-determination)
Authenticity
Finding meaning and purpose (self-actualization)
Spirituality
Healthy interpersonal relationships
Satisfaction with life
Gratitude
The concept of PTG has been described as a part of the positive psychology movement. Since PTG describes positive rather than negative outcomes post-trauma, it falls under the category of positive psychological changes. Positive psychology intends to lay claim to all capacities of positive mental functioning. So, even though PTG (as a defined concept) was not initially described in the positive psychology framework, it is presently included in positive psychological theories. This is reinforced by the parallels between the core concepts of positive psychology and PTG, which are observable by comparing the five domains of the PTGI with the core ideals of positive psychology.
Positive Psychology & Domains of the PTGI
Positive psychological changes and outcomes are defined as a part of positive psychology. PTG is specifically the positive psychological changes post-trauma. The domains of PTG are defined as the different areas of positive psychological changes that are possible post-trauma. The PTGI, a measure designed by Tedeschi and Calhoun in 1996, measures PTG across the following areas or domains:
New Possibilities: The positive psychological changes described by the domain of "New Possibilities" are developing new interests, establishing a new path in life, doing better things with one's life, new opportunities, and an increased likelihood to change what is needed. This can be compared to the "finding meaning and purpose" core ideal of positive psychology.
Relating to Others: The positive psychological changes described by the domain "Relating to Others" are increased reliance on others in times of trouble, a greater sense of closeness with others, willingness to express emotions to others, increased compassion for others, increased effort in relationships, greater appreciation of how wonderful people are, and increased acceptance of needing others. This can be compared to the "healthy interpersonal relationships" core ideal of positive psychology.
Personal Strength: The positive psychological changes described by the domain "Personal Strength" are a greater feeling of self-reliance, increased ability to handle difficulties, improved acceptance of life outcomes and new discovery of mental strength. This can be compared to the "positive personality traits (self-determination, optimism)" core ideals of positive psychology.
Spiritual Change: The positive psychological changes described by the domain "Spiritual Change" are a better understanding of spiritual matters and a stronger religious (or spiritual) faith. This can be compared to the "spirituality" and "authenticity" core ideals of positive psychology.
Appreciation of Life: The positive psychological changes described by the domain "Appreciation of Life" are changed priorities regarding what is important in life, a greater appreciation of the value of one's own life, and increased appreciation of each day. This can be compared to the "satisfaction with life" core ideal of positive psychology.
In 2004, Tedeschi and Calhoun released an updated framework of PTG. The overlaps between positive psychology and posttraumatic growth demonstrate a strong association between these frameworks. However, Tedeschi and Calhoun note that even though these domains describe positive psychological changes post-trauma, the presence of PTG does not necessarily rule out the occurrence of simultaneous negative post-trauma mental processes or negative outcomes (such as psychological distress).
Positive Psychology & Clinical Applications
In a clinical setting, PTG is often included as a part of positive psychology in terms of methodology and treatment goals. Positive psychology interventions (PPI) generally take a multidimensional, therapeutic approach in which psychological tests are used as measurements to track progress. For clinical PPI involving recovery from trauma, there is usually at least one measure of PTG. Most trauma research and clinical intervention focuses on evaluating negative outcomes post-trauma, but from a positive psychological perspective, a strengths-based approach may be more relevant for clinical intervention aimed at recovery. While PTG has been effectively measured in a number of relevant areas of psychology, it has been especially successful in health psychology.
In the exploration of PTG in health psychology settings (hospitals, long-term care clinics, etc.), well-being (a core ideal of positive psychology) was linked to increased PTG in patients. PTG is seen more often in health psychology settings when PPI are utilized. While the focus in health psychology settings is to foster resilience, newer research indicates that health psychology practitioners, doctors, and nurses should also aim to increase positive psychological outcomes (such as PTG) as a part of their recovery goals. Resilience is also central to positive psychology and is involved with PTG. Resilience has been distinguished as a pathway to PTG, but the exact relationship is still being explored; both, however, are positive psychological processes with strong ties to positive psychology.
The use of PPI post-trauma is not only effective in increasing PTG, but has also been shown to reduce negative posttraumatic symptoms. These reductions in posttraumatic stress symptoms and increases in PTG have been demonstrated to be long-lasting: when participants were followed up at 12 months post-PPI, not only was the PTG still present, it had actually increased over time. PPI targeted at reducing stress have demonstrated promising results across a large number of studies.
Conclusion
Over the last 25 years, PTG has demonstrated its place in the framework of positive psychology in theory and in practice. The theoretical frameworks put forth by Seligman and Csikszentmihalyi and by Tedeschi and Calhoun have substantial overlap, and both cite "positive psychological changes". While positive psychology speaks to a general focus on positive aspects of human psychology, PTG speaks specifically to positive psychological change after trauma. This would inherently make PTG a sub-category of positive psychology. PTG has also been referred to in the literature as perceived benefits, positive changes, stress-related growth, and adversarial growth. Regardless of the terminology, however, it is based on positive mental changes, which is the essence of positive psychology.
Psycho-Oncology & PTG
The study of those who have experienced cancer has contributed significantly to the understanding of PTG. While more research is needed to establish the prevalence of cancer-related PTG, there is mounting evidence that a high proportion of patients experience some form of positive growth.
Trauma Exposure in Psycho-Oncology
Individuals diagnosed with cancer may encounter a diverse range of stressors across the stages of the experience. Further, what is traumatic differs from person to person. For example, feelings of uncertainty or fear of death are common following a diagnosis. Distress may also arise from physical symptoms from the illness itself or from cancer treatments. The process of contending with cancer often brings about significant life changes such as economic strain or social role reversals. Among survivors, fear of recurrence is common. The loved ones and caregivers of patients may also experience severe stressors which may lead to PTG.
The impact of trauma on this population is evident in both negative and growth outcomes. PTSD is more common among individuals who are diagnosed with cancer than those who are not, and rates of PTSD are higher in those who experience some cancer types (e.g. brain cancer) and treatment types (e.g. chemotherapy) than in others. Cancer type also matters for PTG, as more advanced forms are more strongly associated with growth. Studying cancer patients has shed light on the relationship between PTSD and PTG. While some studies have found a correlation between PTSD and PTG among cancer patients, others conclude that they are independent constructs.
Promotive Factors in Psycho-Oncology
There are many variables associated with the development of PTG for oncology patients, such as social support, subjective appraisal of the threat, and positive coping strategies. In cancer patients, hope, optimism, spirituality, and positive coping styles are associated with PTG outcomes.
Limited research has investigated whether psychosocial interventions can support the development of PTG. A recent meta-analysis of randomized controlled trials found that psychosocial interventions for cancer patients, especially mindfulness-based interventions, show promise in facilitating PTG. More research is needed in this area to understand how interventions can impact PTG in oncology populations.
Characterizing PTG Outcomes in Psycho-Oncology
Post-traumatic growth takes on many forms in the lives of cancer patients and survivors. For patients, PTG is often described in three categories. 1) They may identify themselves as having strengths or skills that made them competent in the difficult situation. 2) After emotional growth, they may find changes in their personal relationships such as increased closeness or appreciation. 3) Their experience may lead to a greater appreciation of life or strengthen their spirituality.
Jimmie Holland, a founder of the field of psycho-oncology, provides examples of growth following cancer in her book The Human Side of Cancer. Holland tells the story of one patient, Jim, whose experience with PTG altered both his perspective on life and his interpersonal relationships. After undergoing radiation for cancer of the vocal cord, Jim found a new appreciation for health and used his experience to motivate his sons never to start smoking. Further, survivors of cancer often discover a new sense of compassion and find new purpose in giving back to others. After surviving osteogenic sarcoma, which resulted in the amputation of her leg, Sheila Kussner began giving back by visiting other amputees in hospitals to share support. She later went on to raise millions of dollars for cancer research and to establish the Hope and Cope program at the Montreal Jewish General Hospital, which provides psychological support to thousands of patients. These examples may fit within the realm of PTG.
Related Theories and Constructs: Resilience, Thriving, Positive Disintegration, etc.
Resilience
In general, research in psychology shows that people are resilient overall. For example, Southwick and Charney, in a study of 250 prisoners of war from Vietnam, showed that participants developed much lower rates of depression and PTSD symptoms than expected. Donald Meichenbaum estimated that 60% of North Americans will experience trauma in their lifetime; of these, while no one is unscathed, some 70% show resilience and 30% show harmful effects. Similarly, of the 150 million women in America, an estimated 68 million will be victimized over their lifetime, but only about 10% will be affected severely enough to seek help from mental health professionals.
In general, traditional psychology's approach to resiliency, as exhibited in the studies above, is a problem-oriented one: it assumes that PTSD is the problem and that resiliency simply means avoiding or fixing that problem in order to maintain baseline well-being. This type of approach fails to acknowledge any growth that might occur beyond the previously set baseline, however. Positive psychology's idea of thriving attempts to reconcile that failure. A meta-analysis by Shakespeare-Finch and Lurie-Beck in this area indicates that there is actually an association between PTSD symptoms and posttraumatic growth. The null hypothesis that there is no relationship between the two was rejected. The correlation between the two was significant and was found to depend on the nature of the event and the person's age. For example, survivors of sexual assault show less posttraumatic growth than survivors of natural disaster. Ultimately, however, the meta-analysis serves to show that PTSD and posttraumatic growth are not mutually exclusive ends of a recovery spectrum and that they may actually co-occur during a successful progression toward thriving.
It is important to note that while aspects of resilience and growth aid an individual's psychological well-being, they are not the same thing. Dr. Richard Tedeschi and Dr. Erika Felix specifically note that resilience suggests bouncing back and returning to one's previous state of being, whereas post-traumatic growth fosters a transformed way of being or understanding for an individual. Often, traumatic or challenging experiences force an individual to re-evaluate core beliefs, values, or behaviors on both cognitive and emotional levels; the idea of post-traumatic growth is therefore rooted in the notion that these beliefs, values, or behaviors come with a new perspective and expectation after the event. Thus, post-traumatic growth centers around the concept of change, whereas resilience suggests the return to previous beliefs, values, or lifestyles.
Thriving
To understand the significance of thriving in the human experience, it is important to understand its role within the context of trauma and its separation from traditional psychology's idea of resilience. Implicit in the idea of thriving and resilience both is the presence of adversity. O'Leary and Ickovics created a four-part diagram of the spectrum of human response to adversity, the possibilities of which include: succumbing to adversity, surviving with diminished quality of life, resiliency (returning to baseline quality of life), and thriving. Thriving includes not only resiliency, but an additional further improvement over the quality of life previous to the adverse event.
Thriving in positive psychology aims to promote growth beyond mere survival, but it is important to note that some of the theories surrounding its causes and effects are more ambiguous. Literature by Carver indicates that the concept of thriving is a difficult one to define objectively. He distinguishes between physical and psychological thriving, implying that while physical thriving has obvious measurable results, psychological thriving does not. This is the origin of much of the ambiguity surrounding the concept. Carver lists several self-reportable indicators of thriving: greater acceptance of self, change in philosophy, and a change in priorities. These are factors that generally lead a person to feel that they have grown, but they are difficult to measure quantitatively.
The dynamic systems approach to thriving attempts to resolve some of the ambiguity in the quantitative definition of thriving, describing thriving as an improvement in adaptability to future trauma based on a model of attractors and attractor basins. This approach suggests that a reorganization of behaviors is required to make positive adaptive behavior a more significant attractor basin, that is, an area toward which the system tends.
In general, as pointed out by Carver, the idea of thriving seems to be one that is hard to remove from subjective experience. However, the Posttraumatic Growth Inventory developed by Tedeschi and Calhoun helps to set forth a more measurable map of thriving. The five fields of posttraumatic growth that the inventory outlines are: relating to others, new possibilities, personal strength, spiritual change, and appreciation for life. Though literature that addresses "thriving" specifically is sparse, there is much research in these five areas, all of which supports the idea that growth after adversity is a viable and significant possibility for human well-being.
Positive disintegration
The theory of positive disintegration by Kazimierz Dąbrowski postulates that symptoms such as psychological tension and anxiety can be signs that a person is undergoing positive disintegration. The theory proposes that this can happen when an individual rejects previously adopted values (relating to their physical survival and their place in society) and adopts new values based on a higher version of who they can be. Rather than seeing disintegration as a negative state, the theory proposes that it is a transient state that allows an individual to grow towards their personality ideal. The theory stipulates that individuals who have high developmental potential (i.e. those with overexcitabilities) have a higher chance of re-integrating at a higher level of development after disintegration. Scholarly work is needed to ascertain whether disintegrative processes, as specified by the theory, are traumatic, and whether reaching higher integration, e.g. Level IV (directed multilevel disintegration) or V (secondary integration), can be equated to posttraumatic growth.
Aspects
Another attempt at quantitatively charting the concept of thriving is via the Posttraumatic Growth Inventory. The inventory has 21 items and is designed to measure the extent to which one experiences personal growth after adversity. The inventory includes elements from five key areas: relating to others, new possibilities, personal strength, spiritual change, and appreciation for life. These five categories are reminiscent of the subjective experiences Carver struggled to quantify in his own literature on thriving, but are imposed onto scales to maintain measurability. When considering the idea of thriving from the five-point approach, it is easier to place more research from psychology within the context of thriving. Additionally, a short form version of the Posttraumatic Growth Inventory has been created with only 10 items, selecting two questions for each of the five subscales. Studies have been conducted to better understand the validity of this scale and some have found that self-reported measures of posttraumatic growth are unreliable. Frazier et al. (2009) reported that further improvement could be made to this inventory to better capture actual change.
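Because the inventory's structure is explicit (21 respondent-rated items grouped into five subscales, with a 10-item short form taking two items per subscale), scoring it is mechanical. The minimal Python sketch below illustrates that structure; the item-to-subscale index grouping and the 0–5 response scale used here are illustrative assumptions, not the instrument's official scoring key.

    # Minimal sketch of PTGI-style subscale scoring.
    # Assumption: items are rated 0-5; the index grouping below is
    # hypothetical and stands in for the instrument's actual key.

    SUBSCALES = {
        "relating_to_others":   [0, 1, 2, 3, 4, 5, 6],
        "new_possibilities":    [7, 8, 9, 10, 11],
        "personal_strength":    [12, 13, 14, 15],
        "spiritual_change":     [16, 17],
        "appreciation_of_life": [18, 19, 20],
    }

    def score_ptgi(responses):
        """Sum each subscale and the 21-item total for one respondent."""
        if len(responses) != 21:
            raise ValueError("expected 21 item responses")
        if any(not 0 <= r <= 5 for r in responses):
            raise ValueError("responses must be on the 0-5 scale")
        scores = {name: sum(responses[i] for i in items)
                  for name, items in SUBSCALES.items()}
        scores["total"] = sum(responses)
        return scores

    # Example: a respondent reporting moderate change on every item.
    print(score_ptgi([3] * 21))

A short-form scorer would work the same way, simply restricting each subscale list to its two retained items.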
One of the key facets of posttraumatic growth set forth in the inventory is relating to others. Accordingly, much work has been done to indicate that social support resources are extremely important to the facilitation of thriving. House, Cohen, and their colleagues indicate that perception of adequate social support is associated with an improved adaptive tendency. This idea of better adaptive tendency is central to thriving in that it results in an improved approach to future adversity. Similarly, Hazan and Shaver reason that social support provides a solid base of security for human endeavor. The idea of human endeavor here is echoed in another facet of posttraumatic growth, new possibilities, the idea being that a person's confidence to "endeavor" in the face of novelty is a sign of thriving.
Concurrent with a third facet of posttraumatic growth, personal strength, a meta-analysis of six qualitative studies by Finfgeld focuses on courage as a path to thriving. Evidence from the analysis indicates that the ability to be courageous includes acceptance of reality, problem-solving, and determination. This not only directly supports the significance of personal strength in thriving, but can also be connected to the "new possibilities" facet through the idea that determination and adaptive problem-solving aid in constructively confronting new possibilities. Besides this, Finfgeld's study found that courage is promoted and sustained by intra- and interpersonal forces, further supporting the "relating to others" facet and its effect on thriving.
On the appreciation-for-life facet, research done by Tyson on a sample of people 2–5 years into the grieving process reveals the importance of creating meaning. The studies show that coping optimally with bereavement does not involve just "getting over it and moving on", but should also include creating meaning to facilitate the best recovery. The study showed that stories and creative forms of expression increase growth following bereavement. This evidence is strongly supported by work done by Michael and Cooper on the facets of bereavement that facilitate growth, including "the age of the bereaved", "social support", "time since death", "religion", and "active cognitive coping strategies". The idea of coping strategies is echoed in the importance thriving places on improving adaptability. The significance of social support to growth found by Michael and Cooper clearly supports the "relating to others" facet. Similarly, the significance of religion echoes the "spiritual change" facet of posttraumatic growth.
Comparison-based thinking has been shown to aid in the development of posttraumatic growth, in which a person considers the positive differences between their current lives and their life during a traumatic event. Increases in empathy and desire to help others have been observed in trauma survivors as a form of posttraumatic growth. Storytelling with fellow community members, particularly those who have been through similar trauma, can help form a sense of community and encourage self-reflection.
Criticisms, Concerns, and Objective Evidence of PTG
While posttraumatic growth is commonly self-reported by people from different cultures across the world, concerns have been raised on the basis that objectively measurable evidence of posttraumatic growth is limited. This has led some to question whether posttraumatic growth is real or illusory. The concept that posttraumatic growth can be illusory was originally posed by Andreas Maercker and Tanja Zoellner, who suggested that perceptions of PTG manifest in two forms: a transformative, constructive side and an illusory, self-deceptive side. The self-deceptive side serves as a mechanism for coping with, or making sense of, a traumatic event in one's life, rather than as proof of an improved psychological state. Additionally, Adriel Boals suggests a third branch of PTG: perceived PTG, under which illusory and "genuine" PTG fall. Boals asserts that those with perceived PTG often misreport genuine PTG during self-reports, as they are instead experiencing illusory PTG; indeed, Boals claims that illusory PTG is more common among individuals with perceived PTG than is genuine PTG. Furthermore, while a meta-analysis by Shakespeare-Finch and Lurie-Beck found that PTG has a strong curvilinear relationship with PTSD (indicating PTG is highest when PTSD is moderate), numerous studies have shown that PTG is positively associated with posttraumatic stress, which authors such as Boals suggest contradicts the original definition of PTG.
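The curvilinear (inverted-U) relationship reported in that meta-analysis is conventionally tested by adding a squared distress term to a regression, PTG = b0 + b1·PTSD + b2·PTSD², where a significant negative b2 means growth peaks at moderate rather than extreme distress. A minimal Python sketch of that test, using simulated data in place of real symptom scores:

    # Minimal sketch of testing a curvilinear PTSD-PTG association.
    # The data are simulated; scales and coefficients are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    ptsd = rng.uniform(0, 10, 500)            # hypothetical symptom scores
    ptg = (1.0 + 1.2 * ptsd - 0.12 * ptsd**2
           + rng.normal(0, 0.8, 500))         # inverted-U plus noise

    # Fit PTG = b2*PTSD^2 + b1*PTSD + b0. A negative b2 is the
    # signature of the curvilinear relationship described above.
    b2, b1, b0 = np.polyfit(ptsd, ptg, deg=2)
    peak = -b1 / (2 * b2)                     # distress level of maximal growth
    print(f"b2 = {b2:.3f}, growth peaks near PTSD = {peak:.2f}")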
More recently, evidence of the objectively measurable existence of PTG has begun to emerge. A range of biological research is finding real differences between individuals with and without PTG at the level of gene expression and brain activity.
See also
Post-traumatic stress disorder
Positive disintegration
Psychological trauma
Psychological resilience
References
Bibliography
Personal development
Psychological concepts
Adaptation model of nursing

In 1976, Sister Callista Roy developed the Adaptation Model of Nursing, a prominent nursing theory. Nursing theories frame, explain or define the practice of nursing. Roy's model sees the individual as a set of interrelated systems (biological, psychological and social). The individual strives to maintain a balance between these systems and the outside world, but there is no absolute level of balance. Individuals strive to live within a unique band in which they can cope adequately.
Overview of the theory
This model comprises the four domain concepts of person, health, environment, and nursing; it also involves a six-step nursing process. Andrews & Roy (1991) state that the person can be a representation of an individual or a group of individuals. Roy's model sees the person as "a biopsychosocial being in constant interaction with a changing environment". The person is an open, adaptive system who uses coping skills to deal with stressors. Roy sees the environment as "all conditions, circumstances and influences that surround and affect the development and behaviour of the person". Roy describes stressors as stimuli and uses the term residual stimuli to describe those stressors whose influence on the person is not clear. Originally, Roy wrote that health and illness are on a continuum with many different states or degrees possible. More recently, she states that health is the process of being and becoming an integrated and whole person. Roy's goal for nursing is "the promotion of adaptation in each of the four modes, thereby contributing to the person's health, quality of life and dying with dignity". These four modes are physiological, self-concept, role function and interdependence.
Roy employs a six-step nursing process: assessment of behaviour; assessment of stimuli; nursing diagnosis; goal setting; intervention; and evaluation. In the first step, the person's behaviour in each of the four modes is observed. This behaviour is compared with norms and is deemed either adaptive or ineffective. The second step is concerned with factors that influence behaviour; stimuli are classified as focal, contextual or residual. The nursing diagnosis is the statement of the ineffective behaviours along with the identification of the probable cause. This is typically stated as the nursing problem related to the focal stimuli, forming a direct relationship. In the fourth step, goal setting is the focus. Goals need to be realistic and attainable and are set in collaboration with the person; there are usually both short-term and long-term goals that the nurse sets for the patient. Intervention occurs as the fifth step, when the stimuli are manipulated; it is also called the "doing phase". In the final step, evaluation takes place: the degree of change, as evidenced by change in behaviour, is determined. Ineffective behaviours are reassessed, and the interventions revised.
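Read procedurally, the six steps form a loop rather than a line, since evaluation feeds back into reassessment when behaviour remains ineffective. The sketch below makes that reading explicit; it is an illustrative model of the process flow, not part of Roy's own formulation.

    # Minimal sketch of the six-step nursing process as a feedback loop.
    # Step names come from the model; the looping control flow is an
    # illustrative reading of "ineffective behaviours are reassessed".

    STEPS = ["assess behaviour", "assess stimuli", "nursing diagnosis",
             "goal setting", "intervention", "evaluation"]

    def nursing_process(behaviour_now_adaptive, max_cycles=3):
        for cycle in range(1, max_cycles + 1):
            for step in STEPS:
                print(f"cycle {cycle}: {step}")
            if behaviour_now_adaptive(cycle):   # outcome of the evaluation step
                return "adaptive behaviour achieved"
        return "revise interventions and continue reassessment"

    # Example: evaluation finds adaptive behaviour on the second pass.
    print(nursing_process(lambda cycle: cycle >= 2))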
The model had its inception in 1964 when Roy was a graduate student. She was challenged by nursing faculty member Dorothy E. Johnson to develop a conceptual model for nursing practice. Roy's model drew heavily on the work of Harry Helson, a physiologic psychologist. The Roy adaptation model is generally considered a "systems" model, although it also includes elements of an "interactional" model. The model was developed specifically for the individual client, but it can be adapted to families and to communities (Roy, 1983). Roy states (Clements and Roberts, 1983) that "just as the person as an adaptive system has input, output, and internal processes, so too the family can be described from this perspective."
Basic to Roy's model are three concepts: the human being, adaptation, and nursing. The human being is viewed as a biopsychosocial being who is continually interacting with the environment. The human being's goal through this interaction is adaptation. According to Roy and Roberts (1981, p. 43), "the person has two major internal processing subsystems, the regulator and the cognator." These subsystems are the mechanisms used by human beings to cope with stimuli from the internal and external environment. The regulator mechanism works primarily through the autonomic nervous system and includes endocrine, neural, and perception pathways. This mechanism prepares the individual for coping with environmental stimuli. The cognator mechanism includes emotions, perceptual/information processing, learning, and judgment. The process of perception bridges the two mechanisms (Roy and Roberts, 1981).
Types of Stimuli
Three types of stimuli influence an individual's ability to cope with the environment. These include focal stimuli, contextual stimuli, and residual stimuli. Focal stimuli are those that immediately confront the individual in a particular situation. Focal stimuli for a family include individual needs; the level of family adaptation; and changes within the family members, among the members and in the family environment (Roy, 1983). Contextual stimuli are those other stimuli that influence the situation. Residual stimuli include the individual's beliefs or attitudes that may influence the situation. Many times this is the nurse's "hunch" about other factors that can affect the problem. Contextual and residual stimuli for a family system include nurturance, socialization, and support (Roy, 1983). Adaptation occurs when the total stimuli fall within the individual's/family's adaptive capacity, or zone of adaptation. The inputs for a family include all of the stimuli that affect the family as a group. The outputs of the family system are three basic goals: survival, continuity, and growth (Roy, 1983). Roy states (Clements and Roberts, 1983):
Since adaptation level results from the pooled effect of all other relevant stimuli, the nurse examines the contextual and residual stimuli associated with the focal stimulus to ascertain the zone within which positive family coping can take place and to predict when the given stimulus is outside that zone and will require nursing intervention.
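The passage above has an almost algorithmic reading: the pooled effect of focal, contextual, and residual stimuli is compared against the client's zone of adaptation, and a stimulus falling outside that zone signals the need for nursing intervention. The Python sketch below models that reading; since Roy's model is qualitative, every numeric magnitude here is a hypothetical stand-in.

    # Minimal sketch of the pooled-stimuli reading of the zone of
    # adaptation. All magnitudes are hypothetical: the model itself
    # does not prescribe how stimuli are quantified.
    from dataclasses import dataclass

    @dataclass
    class Stimulus:
        kind: str         # "focal", "contextual", or "residual"
        magnitude: float  # hypothetical intensity rating

    def within_adaptation_zone(stimuli, adaptive_capacity):
        """Adaptation occurs when the pooled effect of all relevant
        stimuli falls within the client's adaptive capacity."""
        pooled = sum(s.magnitude for s in stimuli)
        return pooled <= adaptive_capacity

    case = [Stimulus("focal", 6.0),       # e.g. a new diagnosis
            Stimulus("contextual", 2.5),  # e.g. financial strain
            Stimulus("residual", 1.0)]    # e.g. a suspected belief

    if not within_adaptation_zone(case, adaptive_capacity=8.0):
        print("Pooled stimuli exceed the zone: intervention indicated.")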
Four Modes of Adaptation
Roy believes that an individual's adaptation occurs in four different modes. This also holds true for families (Hanson, 1984). These modes are the physiologic mode, the self-concept mode, the role function mode, and the interdependence mode.
The individual's regulator mechanism is involved primarily with the physiologic mode, whereas the cognator mechanism is involved in all four modes (Roy and Roberts, 1981). The family goals correspond to the model's modes of adaptation: survival = physiologic mode; growth = self-concept mode; continuity = role function mode. Transactional patterns fall into the interdependence mode (Clements and Roberts, 1983).
In the physiologic mode, adaptation involves the maintenance of physical integrity. Basic human needs such as nutrition, oxygen, fluids, and temperature regulation are identified with this mode (Fawcett, 1984). In assessing a family, the nurse would ask how the family provides for the physical and survival needs of the family members.
A function of the self-concept mode is the need for maintenance of psychic integrity. Perceptions of one's physical and personal self are included in this mode. Families also have concepts of themselves as a family unit. Assessment of the family in this mode would include the amount of understanding provided to the family members, the solidarity of the family, the values of the family, the amount of companionship provided to the members, and the orientation (present or future) of the family (Hanson, 1984).
The need for social integrity is emphasized in the role function mode. When human beings adapt to various role changes that occur throughout a lifetime, they are adapting in this mode. According to Hanson (1984), the family's role can be assessed by observing the communication patterns in the family. Assessment should include how decisions are reached, the roles and communication patterns of the members, how role changes are tolerated, and the effectiveness of communication (Hanson, 1984). For example, when a couple adjusts their lifestyle appropriately following retirement from full-time employment, they are adapting in this mode.
The need for social integrity is also emphasized in the interdependence mode. Interdependence involves maintaining a balance between independence and dependence in one's relationships with others. Dependent behaviors include affection seeking, help seeking, and attention seeking. Independent behaviors include mastery of obstacles and initiative taking. According to Hanson (1984), when assessing this mode in families, the nurse tries to determine how successfully the family lives within a given community. The nurse would assess the interactions of the family with the neighbors and other community groups, the support systems of the family, and the significant others (Hanson, 1984).
The goal of nursing is to promote adaptation of the client during both health and illness in all four of the modes. Actions of the nurse begin with the assessment process. The family is assessed on two levels. First, the nurse makes a judgment with regard to the presence or absence of maladaptation. Then, the nurse focuses the assessment on the stimuli influencing the family's maladaptive behaviors. The nurse may need to manipulate the environment, an element or elements of the client system, or both in order to promote adaptation.
Many nurses, as well as schools of nursing, have adopted the Roy adaptation model as a framework for nursing practice. The model views the client in a holistic manner and contributes significantly to nursing knowledge. The model continues to undergo clarification and development by the author.
Applying Roy’s Model to Family Assessment
When using Roy's model as a theoretical framework, the following can serve as a guide for the assessment of families.
I. Adaptation Modes
A. Physiologic Mode
1. To what extent is the family able to meet the basic survival needs of its members?
2. Are any family members having difficulty meeting basic survival needs?
B. Self-Concept Mode
1. How does the family view itself in terms of its ability to meet its goals and to assist its members to achieve their goals? To what extent do they see themselves as self-directed? Other-directed?
2. What are the values of the family?
3. Describe the degree of companionship and understanding given to the family members.
C. Role Function Mode
1. Describe the roles assumed by the family members.
2. To what extent are the family roles supportive, in conflict, reflective of role overload?
3. How are family decisions reached?
D. Interdependence Mode
1. To what extent are family members and subsystems within the family allowed to be independent in goal identification and achievement (e.g., adolescents)?
2. To what extent are the members supportive of one another?
3. What are the family's support systems? Significant others?
4. To what extent is the family open to information and assistance from outside the family unit? Willing to assist other families outside the family unit?
5. Describe the interaction patterns of the family in the community.
II. Adaptive Mechanisms
A. Regulator: Physical status of the family in terms of health, i.e., nutritional state, physical strength, availability of physical resources
B. Cognator: Educational level, knowledge base of family, source of decision making, power base, degree of openness in the system to input, ability to process information
III. Stimuli
A. Focal
1. What are the major concerns of the family at this time?
2. What are the major concerns of the individual members?
3. This is usually related to the nursing diagnoses, i.e., the main stimuli causing the problem behaviors. It is important for the nurse to address the focal stimulus first, because the problem behaviors stem from it.
B. Contextual
1. What elements in the family structure, dynamic, and environment are impinging on the manner and degree to which the family can cope with and adapt to their major concerns (i.e., financial and physical resources, presence or absence of support systems, clinical setting and so on)?
These stimuli can be either negative or positive in their relation to the main nursing problem.
C. Residual
1. What knowledge, skills, beliefs, and values of this family must be considered as the family attempts to adapt (i.e., stage of development, cultural background, spiritual/religious beliefs, goals, expectations)? This is normally an assumption the nurse holds that could affect care. One could describe it as the nurse's educated guess about something in the patient's life that could be contributing further to the problem.
The nurse assesses the degree to which the family's actions in each mode are leading to positive coping and adaptation to the focal stimuli. If coping and adaptation are not health promoting, assessment of the types of stimuli and the effectiveness of the regulators provides the basis for the design of nursing interventions to promote adaptation.
By answering each of these questions in each assessment, a nurse can gain a full understanding of the problems a patient may be having. It is important to recognize each stimulus, because otherwise not every aspect of the person's problem can be confronted and addressed. It is the nurse's job to recognize all of these modes, mechanisms, and stimuli while caring for a patient, drawing on advanced knowledge of the nursing process as well as interviews with the individual and with family members.
Callista Roy maintains there are four main adaptation systems, which she calls modes of adaptation:
1. the physiological-physical system
2. the self-concept/group identity system
3. the role mastery/function system
4. the interdependency system.
See also
Nursing theory
References
Bibliography
External links
Roy's faculty profile, Boston College
Nursing theory