Glocalization
Glocalization or glocalisation (a portmanteau of globalization and localism) is the "simultaneous occurrence of both universalizing and particularizing tendencies in contemporary social, political, and economic systems". The concept comes from the Japanese word dochakuka and "represents a challenge to simplistic conceptions of globalization processes as linear expansions of territorial scales. Glocalization indicates that the growing importance of continental and global levels is occurring together with the increasing salience of local and regional levels." Glocalization represents the fusion of "globalization" and "localization," emphasizing the need for global entities to tailor their offerings to suit the unique characteristics of individual regions or communities. Glocal, an adjective, by definition means "reflecting or characterized by both local and global considerations". The term "glocal management", in the sense of "think globally, act locally", is used in the business strategies of companies, in particular by Japanese companies that are expanding overseas.

Variety of uses
Individuals, households and organizations maintaining interpersonal social networks that combine extensive local and long-distance interactions.
The declaration of a specified locality – a town, city, or state – as world territory, with responsibilities and rights on a world scale: a process that started in France in 1950 and was originally called mundialization.

History of the concept
The concept comes from the Japanese word dochakuka, which means global localization. It originally referred to the adaptation of farming techniques to local conditions, and it became a buzzword when Japanese business adopted it in the 1980s. The word stems from Manfred Lange, head of the German National Global Change Secretariat, who used "glocal" in reference to Heiner Benking's exhibit Blackbox Nature: Rubik's Cube of Ecology at an international science and policy conference. "Glocalization" first appeared in a late 1980s publication of the Harvard Business Review. At a 1997 conference on "Globalization and Indigenous Culture", sociologist Roland Robertson stated that glocalization "means the simultaneity – the co-presence – of both universalizing and particularizing tendencies". The term entered use in the English-speaking world via Robertson in the 1990s, Canadian sociologists Keith Hampton and Barry Wellman in the late 1990s, and Zygmunt Bauman. Erik Swyngedouw was another early adopter. Since the 1990s, glocalization has been productively theorized by several sociologists and other social scientists, and may be understood as a process that combines the concerns of localism with the forces of globalization, or a local adaptation and interpretation of global forces. As a theoretical framework, it is compatible with many of the concerns of postcolonial theory, and its impact is particularly recognizable in the digitization of music and other forms of cultural heritage. The concept has since been used in the fields of geography, sociology, and anthropology. It is also a prominent concept in business studies, particularly in the area of marketing goods and services to a heterogeneous set of consumers.

Sociology
The concept of glocalization is included in the discourse on social theory. This is first demonstrated in the way it challenges the notion that globalization overrides locality by describing how the concept of the local is said to be constructed on a trans- or super-local basis or is promoted from the outside.
There is also the position that the association of temporal and spatial dimensions to human life, which emerges in globalization, exerts little impact. Glocalization is also said to capture the emergence of unique new indigenous realities that result from the interpenetration of the global and local spheres. The term "glocklization", combining the glocal concept with a Glock pistol, was coined in 2018 to indicate forms of glocalization that are perceived as unbalanced and destructive to local cultural heritage. Additionally, the concept of glocalization has strong ties to the more commonly understood term globalization, and has been described as a more general treatment of the term. Elements unique to glocalization under this umbrella include the ideas that diversity is the essence of social life, that not all differences are erased, that history and culture operate autonomously to offer a sense of uniqueness to the experiences of groups (whether cultures, societies or nations), that glocalization removes the fear that globalization resembles a tidal wave erasing all differences, and that glocalization does not promise a world free from conflict but offers a more historically grounded and pragmatic worldview.

Religion
Glocalization can be seen throughout virtually every sphere of society, including religion. An example of this can be seen in a study that focused on the differences in Islam in various regions of the world. In this particular study, observations of the religious pillars in Indonesia and Morocco indicated a significant difference in religious form between the two, each blending the religion's fundamental roots with indigenous traditions and local customs. Similar studies have found that regions of the world practicing Christianity and Buddhism experienced similar shifts based on local cultural practices and norms.

Business
While the term "glocalization" is one that developed later in the 20th century, the idea behind it is closely related to the economic and marketing term known as micromarketing – by definition, the "tailoring and advertising of goods and services on a global or near-global basis to increasingly differentiated local and particular markets."

Tourism
Glocalization can be recognized, perhaps most profoundly, in tourism operations throughout the world – particularly in reference to countries in which tour guides and locals are up to date on global pop culture and technology, but still present their communities, heritage, history and culture as distinctively "local." A notable example is referenced by Professor Noel Salazar of the University of Pennsylvania, whose study examined these distinctive glocalization attributes on the island of Java in Indonesia.

Challenges
Glocalization works best for companies that have decentralized authority. Costs increase because companies cannot standardise products and projects; different cultures have different needs and wants, which this challenge highlights. An example of a company succeeding in creating new products for its emerging markets is McDonald's new rice meals in India and China. This shows that McDonald's has researched and understands its new markets' requirements for successful takeaway food. This, however, can be very costly and time-consuming. One of the main challenges for the future of glocalization is to govern it. Glocal governance is the interlinkage between global, national and local formal and informal actors that aim to find common ground, take decisions, and implement and enforce them.
An example of a global business that has faced challenges due to localization of its products can be seen in the closing of a Starbucks in the Forbidden City of China in 2007. Starbucks' attempt to localize to the culture of China by adapting its menu to local elements, such as serving green tea frappuccinos, and by enlarging its stores was prevalent in most areas of China, but when Starbucks spread to the Forbidden City, a problem surrounding cultural identity arose. The "western influences" associated with Starbucks were seen as a threat by a web-based campaign, which was successful in initiating the closing of the Starbucks in the Forbidden City. The leader of this campaign, Rui, stated, "All I want is that Starbucks move out of the Forbidden City peacefully and quietly, and we'll continue enjoying Starbucks coffee elsewhere in the city." Although there are many challenges to globalisation, when done right it has many benefits; allowing companies to reach a larger target market is just one of them. Society also benefits when globalisation occurs, as an increase in market competition generally pushes the price of products down, which means consumers benefit from a lower price point. This decreases the inequality gap, as people who could not previously afford products when the market was controlled by local monopolies are able to purchase the product more cheaply. Although globalisation has benefits for the consumer, it does not always benefit the producer, with newer and smaller companies struggling to keep up with the low production costs of the multi-national competitors. This results in either a higher price and loss of consumers, or a lower profit margin, which in turn results in less competition within the market.

Agriculture
Glocalization is also occurring within the agricultural sphere. One example of this has been soy farming. Previously, there were numerous small-scale soy farms along the east coast of the US. However, as larger corporations outcompeted smaller ones, attention has turned abroad. Anthropologist Andrew Ofstehage refers to this change from small, personal farms to large corporate ones as an aspect of "financialization". Ofstehage expands on this concept by giving the example of the current soy market in Brazil. As financialization has led to land being more expensive and harder to come by in the United States, farmers have turned their attention abroad. This farm crisis in the US was a result of increasingly large corporate farms driving out small family farms and acquiring more and more land. Because of this, farmers both new and experienced, who are privileged to have capital or investors, have turned their attention abroad. Many have begun to invest in Brazilian land to grow soy with money from friends, investors, neighbors, or savings. These transnational farmers have had great success, but as more farmers have followed these steps, the cycle has begun anew. Looking to expand, farmers often take three paths for further profit and financialization: they either sell their Brazilian farms to a new hopeful farmer, keep their farms but return to the United States to manage them internationally, or truly begin anew. Specifically, the farmers sell their Brazilian land and turn to Piauí or Tocantins, places where soy grows well and land is still cheap.
Education
Glocalization of education has been proposed in the specific areas of politics, economics, culture, teaching, information, organization, morality, spirituality, religion and "temporal" literacy. The recommended approach is for local educators to consult global resources for materials and techniques and then adapt them for local use. For example, in information, it involves advancing computer and media understanding to allow students and educators to look beyond their local context.

Media
Thomas Friedman in The World Is Flat talks about how the Internet encourages glocalization, such as encouraging people to make websites in their native languages.

Television
Besides the use of the Internet, television and commercials have become useful strategies that global companies have used to help localize their products. Companies such as McDonald's have relied on television and commercials not only in the Western Hemisphere but in other parts of the world to attract a varying range of audiences in accordance with the demographic of the local area. For example, they have used mascots ranging anywhere from a male clown in the Western Hemisphere to attract younger audiences to an "attractive" female clown in Japan to attract older audiences.

Video games
Some translators of video games favor glocalization over the process of localization in video games. In this context, glocalization seeks from the outset to minimize localization requirements for video games intended to be universally appealing. Academic Douglas Eyman cites the Mists of Pandaria expansion for World of Warcraft as an example of glocalization because it was designed at the outset to appeal to global audiences while celebrating Chinese culture.

Community organization
Glocalization, or glocalism, in community organization refers to community organizing that sees social problems as neither local nor global, but interdependent and interconnected (glocal), necessitating organizing practices that concurrently address local problems and global issues. Glocal organizing techniques are commonly associated with The New Community Organizing, and are distinguished from other methods by emphasizing "play, creativity, joy, peer-based popular education, cultural activism, and a healthy dose of experimentation." One of the most common glocal models of practice, functional community organization, seeks to organize communities (functional communities) around a function (i.e., a need, interest, or common problem that glocally affects people). Functional community organization emphasizes a deep understanding of issues (e.g., power, empowerment, and community interests), strategies for change (e.g., popular education, direct action, and collaboration), and communication strategies that promote "inclusive networking." The goals of functional community organization are to organize communities through direct action in order to meet immediate community needs while addressing glocalized problems. In so doing, functional communities act as their own unique forms of protest, vehicles for community empowerment, and alternatives to institutionalized social welfare systems. Popular examples of functional communities include community projects such as community gardens, Community Technology Centers, gift economy markets, food sharing, and other forms of franchise activism and mutual aid.

See also
Americanization
Cultural homogenization
Internationalization and localization
McDonaldization
Mobile privatization
Fratelli Tutti

Further reading
Sarroub, L. K. (2009). "Glocalism in literacy and marriage in transnational lives". Critical Inquiry in Language Studies (Special Issue: Immigration, Language, and Education) 6(1-2), 63–80.
Hollensen, S. (2016). Global Marketing. Pearson.
Bekh, K. (2016). "A Company's Marketing Mix in Terms of Glocal Marketing". Baltic Journal of Economic Studies, 2(5), 10–15. http://www.baltijapublishing.lv/index.php/issue/article/view/138
Livholts, M., & Bryant, L. (Eds.). (2017). Social Work in a Glocalised World. Routledge.

External links
The Glocal and Global Studies, Glocalizations 2015, Victor Roudometof (2015), Taylor & Francis 2015
Global Change exhibition (May 1990), and the poster on local and global change which a year later was the title for the "Local and Global Change" exhibition (1991)
Glocalization links markets that are geographically dispersed and culturally distinct: www.glocalmatters.org
Glocalism: Journal of Culture, Politics and Innovation (ISSN 2283-7949): https://riviste.unimi.it/index.php/glocalism
Holacracy
Holacracy is a method of decentralized management and organizational governance, which claims to distribute authority and decision-making through a holarchy of self-organizing teams rather than vesting them in a management hierarchy. Holacracy has been adopted by for-profit and non-profit organizations in several countries. It can be seen as part of a greater movement within organisational design to cope with increasingly complex social environments, one that promises a greater degree of transparency, effectiveness and agility.

Origins
The term is found in print for the first time in the adjectival form holocratic in a book from the Collège de 'Pataphysique in May 1957. The Holacracy system was developed at Ternary Software in Exton, Pennsylvania. Ternary founder Brian Robertson distilled the company's best practices into an organizational system that became known as Holacracy in 2007. Robertson later developed the "Holacracy Constitution", which lays out the core principles and practices of the system. In 2011, he released a manifesto of Holacracy, which was later developed, in June 2015, into the book Holacracy: The New Management System for a Rapidly Changing World, which details and explains his practices. He claims that it resembles the Scaled Agile Framework, Sociocracy and Nexus. Robertson claims that the term holacracy is derived from the term holarchy; the latter was coined by Arthur Koestler in his 1967 book The Ghost in the Machine. Koestler wrote that a holarchy is composed of holons (Greek: ὅλον, holon, neuter form of ὅλος, holos, "whole"), or units that are autonomous and self-reliant but also dependent on the greater whole of which they are part. Thus a holarchy is a hierarchy of self-regulating holons that function both as autonomous wholes and as dependent parts.

Influences and comparable systems
Holacracy, which is an alternative to command-and-control, is one of several systems of flat organization. It has been compared to sociocracy, a system of governance developed in the second half of the 20th century.

Essential elements
Roles instead of job descriptions: The building blocks of Holacracy's organizational structure are roles. Holacracy distinguishes between roles and the people who fill them, as one individual can hold multiple roles at any given time. A role is not a job description; its definition follows a clear format including a name, a purpose, optional "domains" to control, and accountabilities, which are ongoing activities to perform. Roles are defined by each circle—or team—via a collective governance process, and are updated regularly in order to adapt to the ever-evolving needs of the organization.
Circle structure: Holacracy structures the various roles in an organization in a system of self-organizing (but not self-directed) circles. Circles are organized hierarchically, and each circle is assigned a clear purpose and accountabilities by its broader circle. However, each circle has the authority to self-organize internally to best achieve its goals. Circles conduct their own governance meetings, assign members to fill roles, and take responsibility for carrying out work within their domain of authority. Circles are connected by two roles known as "lead link" and "rep link", which sit in the meetings of both their circle and the broader circle to ensure alignment with the broader organization's mission and strategy.
Governance process: Each circle uses a defined governance process to create and regularly update its own roles and policies.
Holacracy specifies a structured process known as "integrative decision making" for proposing changes in governance and amending or objecting to proposals. This is not a consensus-based system, nor even a consent-based system, but one that integrates relevant input from all parties and ensures that the proposed changes and objections to those changes are anchored in the roles' needs (and through them, the organization's needs), rather than people's preferences or ego.
Operational process: Holacracy specifies processes for aligning teams according to operational needs, and requires that each member of a circle fulfill certain duties in order to work efficiently and effectively together. There are also key roles that help organise the process and workflow of each circle, including Facilitator, Secretary, Lead Link, and Rep Link. In contrast to the governance process, which is collective and integrative, each member filling a role has a lot of autonomy and authority to make decisions on how to best achieve his or her goals. Some have described the authority paradigm in Holacracy as completely opposite to that of the traditional management hierarchy; instead of needing permission to act or innovate, Holacracy gives blanket authority to take any action needed to perform the work of the roles, unless it is restricted via policies in governance or it involves spending some assets of the organization (money, intellectual property, etc.). Holacracy is highly biased toward action and innovation: it defaults to autonomy and freedom, then uses internal processes to limit that autonomy when its use in a specific way turns out to be detrimental. Holacracy specifies a tactical meeting process that every circle goes through, usually on a weekly basis. A particular feature of the last phase of this meeting, known as "triage", is to focus discussions on the concrete next steps needed by the individual who added the agenda item to address his or her issue. The intention is to avoid large, unproductive discussions dominated by the louder voices. Its developer was described by The New York Times as "a computer programmer with no training in human resources, let alone occupational psychology", and The Wall Street Journal identified the requirement that "every decision must be unanimous" as detrimental. They also reported that "Fifteen percent of an organization's time is spent in" meetings ($27 billion of them "unproductive") and made mention of Robertson's book.

Contemporary practice
In the U.S., for-profit and not-for-profit organizations have adopted and practiced Holacracy. Examples include Zappos. Medium used Holacracy for several years before abandoning it in 2016. A small number of research projects have reported the use of this style of management within the area of software development; they promote its benefits in the search for greater innovation but raise concerns, such as the lack of usual structures and cultural habits around organising work, and note that more research is needed. The New York Times wrote in 2015 that "The goal of Holacracy is to create a dynamic workplace where everyone has a voice and bureaucracy doesn't stifle innovation." The Wall Street Journal had already asked in 2007 "Can a Company Be Run as a Democracy?" (and conceded that it "sounds like a recipe for anarchy"). The reported answer came when 18 percent of the employees at an online seller which had adopted this "radical self-management system" quit.

Claimed advantages
Various claims have been made in respect of Holacracy.
It is said to increase agility, efficiency, transparency, innovation and accountability within an organization, to encourage individual team members to take initiative, and to give them a process in which their concerns or ideas can be addressed. It is further claimed that the system of distributed authority reduces the burden on leaders to make every decision and can speed up communication and decision-making processes (but this can introduce its own challenges). According to Zappos's CEO Tony Hsieh, Holacracy makes individuals more responsible for their own thoughts and actions.

Criticisms
Steve Denning warned against viewing Holacracy as a panacea, claiming that instead of removing hierarchy, decisions are funneled down from circle to circle in a clear hierarchy, with each subsequent circle knowing less about the big picture than the one above. He also claimed that the rules and procedures are very detailed and focused on "administrivia." Lastly, Denning added that the voice of the customer was missing from the Holacracy model, concluding that for agile and customer-focused companies such as Zappos, Holacracy is a way to add administrative rigor, but that Holacracy would not necessarily work well in an organization that did not already have agility and passion for the customer. HolacracyOne partner Olivier Compagne replied to those criticisms on the company's blog, claiming that Denning's criticisms misunderstand Holacracy. Problems occur when transitioning to this system, particularly if older systems of management are allowed to become a hidden structure and system of power. In addition, individuals' space can become lost within the constant connectedness. In moving away from Holacracy, Andy Doyle of Medium noted that "for larger initiatives, which require coordination across functions, it can be time-consuming and divisive to gain alignment" and that Medium believed that "the act of codifying responsibilities in explicit detail hindered a proactive attitude and sense of communal ownership". They also noted that the inaccurate media coverage of Holacracy created a challenge for recruitment. At Zappos, about 14% of the company's employees left voluntarily in 2015 as part of a deliberate attempt by Zappos to retain only employees who believed in Holacracy. Other criticisms include a "one-size-fits-all" approach, layers of bureaucracy and more psychological weight.

See also
Collaborative e-democracy
Corporatism
Dee Hock (in particular re chaordic)
Hierarchical organization
Industrial democracy
Open source governance
Robert's Rules of Order
Sociocracy
Waterfall model
Workers' self-management

External links
Holacracy website (2006)
Interview with Brian Robertson on Holacracy
Communitas
Communitas is a Latin noun commonly referring either to an unstructured community in which people are equal, or to the very spirit of community. It also has special significance as a loanword in cultural anthropology and the social sciences. Victor Turner, who defined the anthropological usage of communitas, was interested in the interplay between what he called social 'structure' and 'antistructure'; liminality and communitas are both components of antistructure. Communitas refers to an unstructured state in which all members of a community are equal, allowing them to share a common experience, usually through a rite of passage. Communitas is characteristic of people experiencing liminality together. This term is used to distinguish the modality of social relationship from an area of common living. There is more than one distinction between structure and communitas. The most familiar is the difference between secular and sacred. Every social position has something sacred about it. This sacred component is acquired during rites of passage, through the changing of positions. Part of this sacredness is achieved through the transient humility learned in these phases; this allows people to reach a higher position.

Victor and Edith Turner
Communitas is an acute point of community. It takes community to the next level and allows the whole of the community to share a common experience, usually through a rite of passage. This brings everyone onto an equal level: even if you are higher in position, you have been lower and you know what that is. Turner (1969, p. 132) distinguishes between:
existential or spontaneous communitas, the transient personal experience of togetherness, e.g. that which occurs during a counter-culture happening;
normative communitas, which occurs as communitas is transformed from its existential state to being organized into a permanent social system due to the need for social control;
ideological communitas, which can be applied to many utopian social models.
Communitas as a concept used by Victor Turner in his study of ritual has been criticized by anthropologists such as John Eade and Michael J. Sallnow in their book Contesting the Sacred (1991). Edith Turner, Victor's widow and an anthropologist in her own right, published in 2011 a definitive overview of the anthropology of communitas, outlining the concept in relation to the natural history of joy, including the nature of human experience and its narration, festivals, music and sports, work, disaster, the sacred, revolution and nonviolence, nature and spirit, and ritual and rites of passage.

Paul and Percival Goodman
Communitas is also the title of a book published in 1947 by the 20th-century American thinker and writer Paul Goodman and his brother, Percival Goodman. Their book examines three kinds of possible societies: a society centered on consumption, a society centered on artistic and creative pursuits, and a society which maximizes human liberty. The Goodmans emphasize freedom from both coercion by a government or church and from human necessities by providing these free of cost to all citizens who do a couple of years of conscripted labor as young adults.

Roberto Esposito
In 1998, Italian philosopher Roberto Esposito published a book under the name Communitas challenging the traditional understanding of this concept. It was translated into English in 2010 by Timothy Campbell.
In this book, Esposito offers a very different interpretation of the concept of communitas, based on a thorough etymological analysis of the word: "Community isn't a property, nor is it a territory to be separated and defended against those who do not belong to it. Rather, it is a void, a debt, a gift to the other that also reminds us of our constitutive alterity with respect to ourselves." He goes on with his "deconstruction" of the concept of communitas: "From here it emerges that communitas is the totality of persons united not by a "property" but precisely by an obligation or a debt; not by an "addition" but by a "subtraction": by a lack, a limit that is configured as an onus, or even as a defective modality for him who is "affected", unlike for him who is instead "exempt" or "exempted". Here we find the final and most characteristic of the oppositions associated with (or that dominate) the alternative between public and private, those in other words that contrast communitas to immunitas. If communis is he who is required to carry out the functions of an office ― or to the donation of a grace ― on the contrary, he is called immune who has to perform no office, and for that reason he remains ungrateful. He can completely preserve his own position through a vacatio muneris. Whereas the communitas is bound by the sacrifice of the compensatio, the immunitas implies the beneficiary of the dispensatio." "Therefore the community cannot be thought of as a body, as a corporation in which individuals are founded in a larger individual. Neither is community to be interpreted as a mutual, intersubjective "recognition" in which individuals are reflected in each other so as to confirm their initial identity; as a collective bond that comes at a certain point to connect individuals that before were separate. The community isn't a mode of being, much less a "making" of the individual subject. It isn't the subject's expansion or multiplication but its exposure to what interrupts the closing and turns it inside out: a dizziness, a syncope, a spasm in the continuity of the subject."

Others
For more on this perspective, see also Jean-Luc Nancy's paper "The Confronted Community" as well as his book The Inoperative Community. See also Maurice Blanchot's book The Unavowable Community (1983), which is an answer to Jean-Luc Nancy's Inoperative Community. Giorgio Agamben engages in a similar argument about the concept of community in his 1990 book The Coming Community (translated into English by Michael Hardt in 1993). Rémi Astruc, a French scholar, recently proposed, in his essay Nous? L'aspiration à la Communauté et les arts (2015), to draw a distinction between Community with a capital C, as the longing for communitas, and communities (plural and with a small c), to name its numerous actualizations in human societies. Finally, on the American side, see The Community of Those Who Have Nothing in Common by Alphonso Lingis. Christian author Alan Hirsch used the term to describe a more active, tighter-knit community in his book The Forgotten Ways: Reactivating the Missional Church.

Further reading
Read the introduction from Roberto Esposito's book Communitas: The Origin and Destiny of Community, Introduction: "Nothing in Common".
Turner, Victor. "Rituals and Communitas." Creative Resistance. 26 Nov. 2005.
Eade, J. & Sallnow, M. (Eds.). Contesting the Sacred (1991).
Carse, James P. The Religious Case Against Belief. Penguin, New York, 2008.
Geragogy
Geragogy (also geragogics) is a theory which argues that older adults are sufficiently different that they warrant a separate educational theory. The term eldergogy has also been used. Some critics have noted that "one should not expect from geragogy some comprehensive educational theory for older adult learners, but only an awareness of and sensitivity towards gerontological issues". Key distinctions between traditional pedagogy and geragogy include offering "opportunities for older adult learners to set the curriculum themselves and to learn through activities of personal relevance" as well as recognition of age-related issues which may affect learning, such as reduced sensory perception, limited motor capabilities and changes in cognitive processes, especially memory. Collaborative peer learning, as employed in the University of the Third Age, is a common element within geragogic settings.

Principles of geragogy
From John, Martha T. (1988), Geragogy: A theory for teaching the elderly:
Learning should aim to provide skills and resources which maintain personal independence. Useful, practical outcomes must therefore be highlighted before a course of study begins, and any assigned tasks must have meaning for older adults.
Enjoyment, curiosity, seeking information and desiring communication are typical routes into learning.
Variety in teaching methods is required, rather than reliance on lengthy verbal presentations. A flexible, interdisciplinary approach which responds to the needs of the learners present is vital.
Tutors should strive to maintain a clear focus on the topic, limiting the number of ideas presented. Irrelevant or overly distracting concepts should be avoided.
In place of discipline or rote-learning, tutors should stimulate engagement with warmth, positive comments, approval and encouragement.
Learners may take longer to complete tasks and assignments than younger people. They may also wish to return repeatedly to a task until they feel comfortable.
Examples should be reinforced regularly and often, using differing contexts in order to give as many opportunities as possible for learners to grasp a concept.
The past experiences of learners can be useful in grounding their understanding. Tutors should seek to review specific skills which allow each learner to be creative in their own way, building on their personal life experience. It is also important to review information that may have been learned in the past (such as at school) but has not been used for some time.

Further reading
Battersby, D. (1987). "From andragogy to geragogy". Journal of Educational Gerontology 2(1): 4–10.
Berdes, C., Dawson, G.D., & Zych, A.A. (Eds.) (1992). Geragogics: European research in gerontological education and educational gerontology. New York: The Haworth Press.
Formosa, M. (2002). "Critical gerogogy: Developing practical possibilities for critical educational gerontology". Education and Ageing 17(3): 73–86.
John, M.T. (1983). Teaching and loving the elderly. Springfield, IL: Charles C. Thomas.
Johnson, L. (2016). "Geragogy". In S. Danver (Ed.), The SAGE encyclopedia of online education (pp. 504–508). SAGE Publications, Inc.
Lebel, J. (1978). "Beyond andragogy to geragogy". Lifelong Learning: The Adult Years 1(9): 16–8.
Pearson, M. (2011). "Gerogogy in patient education - revisited". The Oklahoma Nurse 56(2): 12–17.
Pearson, M. (1986). "Gerogogy in patient education". Home Healthcare Nurse 14(8): 631–636.
Dialogical self
The dialogical self is a psychological concept which describes the mind's ability to imagine the different positions of participants in an internal dialogue, in close connection with external dialogue. The "dialogical self" is the central concept in dialogical self theory (DST), as created and developed by the Dutch psychologist Hubert Hermans since the 1990s.

Overview
Dialogical Self Theory (DST) weaves two concepts, self and dialogue, together in such a way that a more profound understanding of the interconnection of self and society is achieved. Usually, the concept of self refers to something "internal," something that takes place within the mind of the individual person, while dialogue is typically associated with something "external," that is, processes that take place between people involved in communication. The composite concept "dialogical self" goes beyond the self-other dichotomy by infusing the external into the internal and, in reverse, introducing the internal into the external. Functioning as a "society of mind", the self is populated by a multiplicity of "self-positions" that have the possibility to entertain dialogical relationships with each other. In Dialogical Self Theory the self is considered as "extended," that is, individuals and groups in the society at large are incorporated as positions in the mini-society of the self. As a result of this extension, the self includes not only internal positions (e.g., I as the son of my mother, I as a teacher, I as a lover of jazz), but also external positions (e.g., my father, my pupils, the groups to which I belong). Given the basic assumption of the extended self, the other is not simply outside the self but rather an intrinsic part of it. There is not only the actual other outside the self, but also the imagined other who is entrenched as the other-in-the-self. An important theoretical implication is that basic processes, like self-conflicts, self-criticism, self-agreements, and self-consultancy, take place in different domains of the self: within the internal domain (e.g., "As an enjoyer of life I disagree with myself as an ambitious worker"); between the internal and external (extended) domain (e.g., "I want to do this but the voice of my mother in myself criticizes me"); and within the external domain (e.g., "The way my colleagues interact with each other has led me to decide for another job"). As these examples show, there is not always a sharp separation between the inside of the self and the outside world, but rather a gradual transition. DST assumes that the self as a society of mind is populated by internal and external self-positions. When some positions in the self silence or suppress other positions, monological relationships prevail. When, in contrast, positions are recognized and accepted in their differences and alterity (both within and between the internal and external domains of the self), dialogical relationships emerge with the possibility to further develop and renew the self and the other as central parts of the society at large.

Historical background
DST is inspired by two thinkers in particular, William James and Mikhail Bakhtin, who worked in different countries (the USA and Russia, respectively), in different disciplines (psychology and literary sciences), and in different theoretical traditions (pragmatism and dialogism). As the composite term dialogical self suggests, the theory finds itself not exclusively in one of these traditions but explicitly at their intersection.
As a theory about the self it is inspired by William James, and as a theory about dialogue it elaborates on some insights of Mikhail Bakhtin. The purpose of the theory is to profit from the insights of founding fathers like William James, George Herbert Mead and Mikhail Bakhtin and, at the same time, to go beyond them. William James (1890) proposed a distinction between the I and the Me, which, according to Morris Rosenberg, is a classic distinction in the psychology of the self. According to James, the I is equated with the self-as-knower and has three features: continuity, distinctness, and volition. The continuity of the self-as-knower is expressed in a sense of personal identity, that is, a sense of sameness through time. A feeling of distinctness from others, or individuality, is also characteristic of the self-as-knower. Finally, a sense of personal volition is reflected in the continuous appropriation and rejection of thoughts by which the self-as-knower manifests itself as an active processor of experience. Of particular relevance to DST is James's view that the Me, equated with the self-as-known, is composed of the empirical elements considered as belonging to oneself. James was aware that there is a gradual transition between Me and mine and concluded that the empirical self is composed of all that the person can call his or her own, "not only his body and his psychic powers, but his clothes and his house, his wife and children, his ancestors and friends, his reputation and works, his lands and horses, and yacht and bank-account". According to this view, people and things in the environment belong to the self, as far as they are felt as "mine". This means that not only "my mother" belongs to the self but even "my enemy". In this way, James proposed a view in which the self is 'extended' to the environment. This proposal contrasts with a Cartesian view of the self, which is based on a dualistic conception, not only between self and body but also between self and other. With his conception of the extended self, defined as going beyond the skin, James paved the way for later theoretical developments in which other people and groups, defined as "mine", are part of a dynamic multi-voiced self. In the above quotation from William James, we see a constellation of characters (or self-positions) which he sees as belonging to the Me/mine: my wife and children, my ancestors and friends. Such characters are more explicitly elaborated in Mikhail Bakhtin's metaphor of the polyphonic novel, which became a source of inspiration for later dialogical approaches to the self. In proposing this metaphor, he draws on the idea that in Dostoevsky's works there is not a single author at work—Dostoevsky himself—but several authors or thinkers, portrayed as characters such as Ivan Karamazov, Myshkin, Raskolnikov, Stavrogin, and the Grand Inquisitor. These characters are not presented as obedient slaves in the service of one author-thinker, Dostoevsky, but treated as independent thinkers, each with their own view of the world. Each hero is put forward as the author of his own ideology, and not as the object of Dostoevsky's finalizing artistic vision. Rather than a multiplicity of characters within a unified world, there is a plurality of consciousnesses located in different worlds. As in a polyphonic musical composition, multiple voices accompany and oppose one another in dialogical ways.
In bringing together different characters in a polyphonic construction, Dostoevsky creates a multiplicity of perspectives, portraying characters conversing with the Devil (Ivan and the Devil), with their alter egos (Ivan and Smerdyakov), and even with caricatures of themselves (Raskolnikov and Svidrigailov). Inspired by the original ideas of William James and Mikhail Bakhtin, Hubert Hermans, Harry Kempen and Rens van Loon wrote the first psychological publication on the "dialogical self", in which they conceptualized the self in terms of a dynamic multiplicity of relatively autonomous I-positions in the (extended) landscape of the mind. In this conception, the I has the possibility to move from one spatial position to another in accordance with changes in situation and time. The I fluctuates among different and even opposed positions, and has the capacity to imaginatively endow each position with a voice so that dialogical relations between positions can be established. The voices function like interacting characters in a story, involved in processes of question and answer, agreement and disagreement. Each of them has a story to tell about their own experiences from their own stance. As different voices, these characters exchange information about their respective Me's and mines, resulting in a complex, narratively structured self.

Construction of assessment and research procedures
The theory has led to the construction of different assessment and research procedures for investigating central aspects of the dialogical self. Hubert Hermans has constructed the Personal Position Repertoire (PPR) method, an idiographic procedure for assessing the internal and external domains of the self in terms of an organized position repertoire. This is done by offering the participant a list of internal and external self-positions. The participants mark those positions that they feel are relevant in their lives. They are allowed to add extra internal and external positions to the list and phrase them in their own terms. The relationship between internal and external positions is then established by inviting the participants to fill out a matrix with the rows representing the internal positions and the columns the external positions. In the entries of the matrix, the participant fills in, on a scale from 0 to 5, the extent to which an internal position is prominent in relation to an external position. The scores in the matrix allow for the calculation of a number of indices, such as sum scores representing the overall prominence of particular internal or external positions and correlations showing the extent to which internal (or external) positions have similar profiles. On the basis of the results of the quantitative analysis, some positions can be selected, by the client or assessor, for closer examination. From the selected positions the client can tell a story that reflects the specific experiences associated with that position and, moreover, assessor and client can explore which positions can be considered as a dialogical response to one or more other positions. In this way, the method combines both qualitative and quantitative analyses.
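The quantitative side of the PPR matrix is simple bookkeeping, and a minimal sketch may help make it concrete. The following Python snippet assumes a small ratings matrix with invented position labels and 0-5 values used purely for illustration; the row and column sums and the row-wise correlations correspond to the sum scores and profile correlations mentioned above, not to any published dataset.

```python
import numpy as np

# Hypothetical PPR matrix: rows are internal positions, columns are external
# positions, and each entry is a 0-5 rating of how prominent the internal
# position is in relation to the external position. Labels and values are
# invented for illustration only.
internal = ["I as enjoyer of life", "I as ambitious worker", "I as doubter"]
external = ["my mother", "my colleague", "my friend"]
ratings = np.array([
    [4, 1, 5],
    [2, 5, 1],
    [3, 2, 2],
])

# Sum scores: overall prominence of each internal position (row sums)
# and of each external position (column sums).
internal_prominence = ratings.sum(axis=1)
external_prominence = ratings.sum(axis=0)

# Correlations between the profiles of internal positions (rows as variables):
# high values indicate internal positions that behave similarly across the
# external positions; the same can be done for external positions via ratings.T.
internal_profile_corr = np.corrcoef(ratings)

for name, score in zip(internal, internal_prominence):
    print(f"{name}: sum score {score}")
for name, score in zip(external, external_prominence):
    print(f"{name}: sum score {score}")
print("Internal position profile correlations:")
print(internal_profile_corr.round(2))
```

In practice the selection of positions for closer qualitative examination would follow from these indices together with the client's and assessor's judgment, which the sketch does not attempt to model.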
Psychometric aspects of the PPR method
The psychometric aspects of the PPR method were refined in a procedure proposed by A. Kluger, Nir, & Y. Kluger. The authors analyze clients' Personal Position Repertoires by creating a bi-plot of the factors underlying their internal and external positions. A bi-plot provides a clear and comprehensible visual map of the relations between all the meaningful internal and external positions within the self, in such a way that both types of positions are simultaneously visible. Through this procedure, clusters of internal and external positions and dominant patterns can be easily observed and analyzed. The method allows researchers or practitioners to study the general deep structures of the self. There are multiple bi-plot technologies available today. The simplest approach, however, is to perform a standard principal component analysis (PCA). To obtain a bi-plot, a PCA is performed once on the external positions and once on the internal positions, with the number of components in both PCAs restricted to two. Next, a scatter of the two PCAs is plotted on the same plane, where the results of the first components are projected onto the X-axis and those of the second components onto the Y-axis. In this way, an overview of the organization of the internal and external positions together is realized.
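One way to read this two-PCA procedure in code is sketched below, again with an invented ratings matrix. The choice of scikit-learn's PCA and matplotlib is simply a common tooling choice, and treating the internal positions as the cases of one PCA and the external positions as the cases of the other is an interpretive assumption for the sketch, not a detail taken from the published procedure.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Hypothetical PPR ratings matrix, as in the earlier sketch:
# rows are internal positions, columns are external positions (0-5 ratings).
internal = ["I as enjoyer of life", "I as ambitious worker", "I as doubter"]
external = ["my mother", "my colleague", "my friend"]
ratings = np.array([
    [4, 1, 5],
    [2, 5, 1],
    [3, 2, 2],
], dtype=float)

# One PCA with the internal positions as cases (external positions as the
# variables) and one with the external positions as cases, each restricted
# to two components.
internal_xy = PCA(n_components=2).fit_transform(ratings)
external_xy = PCA(n_components=2).fit_transform(ratings.T)

# Overlay both two-component solutions on the same plane: first components
# on the X-axis, second components on the Y-axis.
fig, ax = plt.subplots()
ax.scatter(internal_xy[:, 0], internal_xy[:, 1], marker="o", label="internal positions")
ax.scatter(external_xy[:, 0], external_xy[:, 1], marker="^", label="external positions")
for label, (x, y) in zip(internal, internal_xy):
    ax.annotate(label, (x, y))
for label, (x, y) in zip(external, external_xy):
    ax.annotate(label, (x, y))
ax.set_xlabel("component 1")
ax.set_ylabel("component 2")
ax.legend()
plt.show()
```

The resulting plot only illustrates the layout of such a map; interpreting clusters and dominant patterns would, as the authors describe, remain a substantive task for the researcher or practitioner.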
The Personality Web assessment method
Another assessment method, the Personality Web, was devised by Raggatt. This semi-structured method starts from the assumption that the self is populated by a number of opposing narrative voices, with each voice having its own life story. Each voice competes with other voices for dominance in thought and action, and each is constituted by a different set of affectively charged attachments to people, events, objects and one's own body. The assessment comprises two phases. In the first phase, 24 attachments are elicited in four categories: people, events, places and objects, and orientations to body parts. In an interview, the history and meaning of each attachment is explored. In the second phase, participants are invited to group their attachments by strength of association; cluster analysis and multidimensional scaling are then used to map the individual's web of attachments. This method represents a combination of qualitative and quantitative procedures that provide insight into the content and organization of a multi-voiced self.

Self-Confrontation Method
Dialogical relationships are also studied with an adapted version of the Self-Confrontation Method (SCM). Take the following example. A client, Mary, reported that she sometimes experienced herself as a witch, eager to murder her husband, particularly when he was drunk. She did a self-investigation in two parts, one from her ordinary position as Mary and another from the position of the witch. Then, she told from each of the positions a story about her past, present, and future. These stories were summarized in the form of a number of sentences. It appeared that Mary formulated sentences that were much more acceptable from a societal point of view than those from the witch. Mary formulated sentences like "I want to try to see what my mother gives me: there's only one of me" or "For the first time in my life, I'm engaged in making a home ("home" is also coming at home, entering into myself)", whereas the witch produced statements like "With my bland, pussycat qualities I have vulnerable things in hand, from which I derive power at a later moment (somebody tells me things that I can use so that I get what I want)" or "I enjoy when I have broken him [husband]: from a power position entering the battlefield." It was found that the sentences of the two positions were very different in content, style, and affective meaning. Moreover, the relationship between Mary and the witch seemed to be more monological than dialogical; that is, either the one or the other was in control of the self and the situation, and there was no exchange between them. After the investigation, Mary received therapeutic supervision during which she started to keep a diary in which she learned to make fine discriminations between her own experiences as Mary and those of the witch. She became not only aware of the needs of the witch but also learned to give an adequate response as soon as she noticed that the energy of the witch was coming up. In a second investigation, one year later, the intensely conflicting relationship between Mary and the witch was significantly reduced and, as a result, there was less tension and stress in the self. She reported that in some situations she could even make good use of the energy of the witch (e.g., when applying for a job). Whereas in some situations she was in control of the witch, in other situations she could even cooperate with her. The changes that took place between investigation 1 and investigation 2 suggested that the initial monological relationship between the two positions changed clearly in a more dialogical direction.

The Initial Questionnaire method
Under the supervision of the Polish psychologist Piotr Oleś, a group of researchers constructed a questionnaire method, called the Initial Questionnaire, for the measurement of three types of "internal activity": (a) change of perspective, (b) internal monologue and (c) internal dialogue. The purpose of this questionnaire is to induce the subject's self-reflection and determine which I-positions are reflected by the participant's interlocutors and which of them give new and different points of view to the person. The method includes a list of potential positions. The participants are invited to choose some of them and can add their own to the list. The selected positions, both internal and external ones, are then assessed as belonging to the dialogue, monologue or perspective categories. Such a questionnaire is well suited for the investigation of correlations with other questionnaires. For example, correlating the Initial Questionnaire with the Revised NEO Personality Inventory (NEO PI-R), the researchers found that persons having inner dialogues scored significantly lower on Assertiveness and higher on Self-Consciousness, Fantasy, Aesthetics, Feelings and Openness than people having internal monologues. They concluded that "people entering into imaginary dialogues in comparison with ones having mainly monologues are characterized by a more vivid and creative imagination (Fantasy), a deep appreciation of art and beauty (Aesthetics) and receptivity to inner feelings and emotions (Feelings). They are curious about both inner and outer worlds and their lives are experientially richer. They are willing to entertain novel ideas and unconventional values and they experience positive as well as negative emotions more keenly (Openness). At the same time these persons are more disturbed by awkward social situations, uncomfortable around others, sensitive to ridicule, and prone to feelings of inferiority (Self-Consciousness), they prefer to stay in the background and let others do the talking (Assertiveness)".
Other methods
Other methods have been developed in fields related to DST. Based on Stiles' assimilation model, Osatuke et al. describe a method that enables the researcher to compare what is said by a client (verbal content) and how it is said (speech sounds). With this method the authors are able to assess to what extent the vocal manifestations (how it is said) of different internal voices of the same client parallel, contradict or complement their written manifestations (what is said). This method can be used to study the non-verbal characteristics of different voices in the self in connection with verbal content.

Dialogical sequence analysis
On the basis of Mikhail Bakhtin's theory of utterances, Leiman devised a dialogical sequence analysis. This method starts from the assumption that every utterance has an addressee. The central question is: to whom is the person speaking? Usually, we think of one listener as the immediately observable addressee. However, the addressee is rather a multiplicity of others, a complex web of invisible others, whose presence can be traced in the content, flow and expressive elements of the utterance (e.g., I'm directly addressing you, but while speaking I'm protesting to a third person who is invisibly present in the conversation). When more than one addressee is present in the conversation, the utterance positions the author/speaker in several (metaphorical) locations. Usually, these locations form sequences that can be examined and made explicit when one listens carefully not only to the content but also to the expressive elements in the conversation. Leiman's method, which analyzes a conversation in terms of "chains of dialogical patterns", is theory-guided, qualitative and sensitive to the verbal and the non-verbal aspects of utterances.

Fields of application
It is not the main purpose of the theory to formulate testable hypotheses, but to generate new ideas. It is certainly possible to perform theory-guided research on the basis of the theory, as exemplified by a special issue on dialogical self research in the Journal of Constructivist Psychology (2008) and in other publications (further on in the present section). Yet, the primary purpose is the generation of new ideas that lead to continued theory, research, and practice on the basis of links between the central concepts of the theory. Theoretical advances, empirical research, and practical applications are discussed in the International Journal for Dialogical Science and at the biennial International Conferences on the Dialogical Self, which have been held in different countries and continents: Nijmegen, Netherlands (2000), Ghent, Belgium (2002), Warsaw, Poland (2004), Braga, Portugal (2006), Cambridge, United Kingdom (2008), Athens, Greece (2010), Athens, Georgia, United States (2012), and The Hague, Netherlands (2014). The aim of the journal and the conferences is to transcend the boundaries of (sub)disciplines, countries, and continents and create fertile interfaces where theorists, researchers and practitioners meet in order to engage in innovative dialogue.
After the initial publication on DST, the theory has been applied in a variety of fields: cultural psychology; psychotherapy; personality psychology; psychopathology; developmental psychology; experimental social psychology; autobiography; social work; educational psychology; brain science; Jungian psychoanalysis; history; cultural anthropology; constructivism; social constructionism; philosophy; the psychology of globalization; cyberpsychology; media psychology; vocational psychology; and literary sciences. Fields of application are also reflected by several special issues that have appeared in psychological journals. In Culture & Psychology (2001), DST, as a theory of personal and cultural positioning, was presented and commented on by researchers from different cultures. In Theory & Psychology (2002), the potential contribution of the theory to a variety of fields was discussed: developmental psychology, personality psychology, psychotherapy, psychopathology, brain sciences, cultural psychology, Jungian psychoanalysis, and semiotic dialogism. A second issue of this journal, published in 2010, was also devoted to DST. In the Journal of Constructivist Psychology (2003), researchers and practitioners focused on the implications of the dialogical self for personal construct psychology, on the philosophy of Martin Buber, on the rewriting of narratives in psychotherapy, and on a psycho-dramatic approach in psychotherapy. The topic of mediated dialogue in a global and digital age was at the heart of a special issue in Identity: An International Journal of Theory and Research (2004). In Counselling Psychology Quarterly (2006), the dialogical self was applied to a variety of topics, such as the relationship between adult attachment and working models of emotion, paranoid personality disorder, narrative impoverishment in schizophrenia, and the significance of social power in psychotherapy. In the Journal of Constructivist Psychology (2008) and in Studia Psychologica (2008), groups of researchers addressed the question of how empirical research can be performed on the basis of DST. The relevance of the dialogical self to developmental psychology was discussed in a special issue of New Directions for Child and Adolescent Development (2012). The application of the dialogical self in educational settings was presented in a special issue of the Journal of Constructivist Psychology (2013).

Evaluation
Since its inception in 1992, DST has been discussed and evaluated, particularly at the biennial International Conferences on the Dialogical Self and in the International Journal for Dialogical Science. Some of the main positive evaluations and main criticisms are summarized here. On the positive side, many researchers appreciate the breadth and the integrative character of the theory. As the above review of applications demonstrates, there is a broad range of fields in psychology and other disciplines in which the theory has received interest from thinkers, researchers and practitioners. The breadth of interest is also reflected by the range of scientific journals that have devoted special issues to the theory and its implications. The theory has the potential to bring together scientists and practitioners from a variety of countries, continents and cultures. The Fifth International Conference on the Dialogical Self in Cambridge, United Kingdom attracted 300 participants from 43 countries. The conference focused primarily on DST, and dialogism as a related field.
However, by focusing on dialogue, the dialogical self goes beyond the postmodernist idea of the decentralization of the self and the notion of fragmentation. Recent work by John Rowan resulted in the book 'Personification: Using the Dialogical Self in Psychotherapy and Counselling', published by Routledge, which shows those working in the therapeutic field how to apply the concepts. Criticism The theory and its applications have also received several criticisms. Many researchers have noted a discrepancy between theory and research. Certainly, more than most post-modernist approaches, the theory has instigated a variety of empirical studies, and some of its main tenets have been confirmed in experimental social-psychological research. Yet, the gap between theory and research still exists. Closely related to this gap, there is a lack of connection between dialogical self research and mainstream psychology. Although the theory and its applications have been published in mainstream journals like Psychological Bulletin and the American Psychologist, this has not yet led to the adoption of the theory as a significant development in mainstream (American) psychology. Apart from the theory-research gap, an additional reason for the lack of connection with mainstream research may be that interest in the notion of dialogue, central in the history of philosophy since Plato, is largely neglected in psychology and other social sciences. Another disadvantage of the theory is that it lacks a research procedure that is sufficiently common to allow for the exchange of research data among investigators. Although different research tools have been developed (see the above review of assessment and research methods), none of them are used by a majority of researchers in the field. Investigators often use different research tools, which leads to a considerable richness of information but, at the same time, creates a stumbling block for the comparison of research data. It seems that the breadth of the theory and the richness of its applications have a shadowy side in the relative isolation of research in the DST subfields. Other researchers find the scientific work done thus far to be too verbal in nature. While the theory explicitly acknowledges the importance of pre-linguistic and non-linguistic forms of dialogue, the actual research typically takes place on the verbal level, with the non-verbal level neglected (a notable exception is cultural-anthropological research on shape-shifting). Finally, some researchers would like to see more emphasis on the bodily aspects of dialogue. Up to now the theory has focused almost exclusively on the transcendence of the self-other dualism, as typical of the modern model of the self. More work should be done on the embodied nature of the dialogical self (for example, on the role of the body in connection with emotions). See also Dialectic process vs. dialogic process Dialogical analysis Egalitarian dialogue Internal discourse Philosophy of dialogue References Further reading H.J.M. Hermans, The Dialogical Self in Psychotherapy, limited free access H.J.M. Hermans, A. Hermans-Konopka, Dialogical Self Theory: Positioning and Counter-Positioning in a Globalizing Society, limited free access J. Rowan, Personification: Using the Dialogical Self in Psychotherapy and Counselling, limited free access Hermans, H.J.M., & Gieser, T. (Eds.) (2012). Handbook of Dialogical Self Theory. Cambridge, UK: Cambridge University Press. 
Hermans, H.J.M. (2012). Between dreaming and recognition seeking: The emergence of dialogical self theory. Lanham, Maryland: University Press of America. Assessing and Stimulating a Dialogical Self in Groups, Teams, Cultures, and Organizations. Ed. by H. Hermans. Springer, 2016. External links Hubert Hermans website International Society for Dialogical Science website International Journal for Dialogical Science website 1st International Conference on the Dialogical Self 5th International Conference on the Dialogical Self 6th International Conference on the Dialogical Self 9th International Conference on the Dialogical Self Psychological theories Self Psychological concepts
0.773219
0.976884
0.755346
DOTMLPF
DOTMLPF (pronounced "Dot-MiL-P-F") is an acronym for doctrine, organization, training, materiel, leadership and education, personnel, and facilities. It is used by the United States Department of Defense and was defined in the Joint Capabilities Integration Development System, or JCIDS Process, as the framework to design what administrative changes and/or acquisition efforts would fill a capability need required to accomplish a mission. Because combatant commanders define requirements in consultation with the Office of the Secretary of Defense (OSD), they are able to consider gaps in the context of strategic direction for the total US military force and influence the direction of requirements earlier in the acquisition process, in particular, materiel. It also serves as a mnemonic for staff planners to consider certain issues prior to undertaking a new effort. The elements are: Doctrine, Organization, Training, Materiel, Leadership, Personnel, Facilities. Here is an example of how DOTMLPF would be interpreted in the military context: Doctrine: the way they fight, e.g., emphasizing maneuver warfare and combined air-ground campaigns. Organization: how they organize to fight; divisions, air wings, Marine-Air Ground Task Forces (MAGTFs), etc. Training: how they prepare to fight tactically; basic training to advanced individual training, various types of unit training, joint exercises, etc. Materiel: all the “stuff” necessary to equip our forces that DOES NOT require a new development effort (weapons, spares, test sets, etc. that are “off the shelf” both commercially and within the government). Leadership and education: how they prepare their leaders to lead the fight from squad leader to 4-star general/admiral; professional development. Personnel: availability of qualified people for peacetime, wartime, and various contingency operations. Facilities: real property; installations and industrial facilities (e.g. government-owned ammunition production facilities) that support the forces. The idea is to fix the capability gap, and CJCSI 3170.01G – Joint Capabilities Integration and Development System, 1 March 2009, is the one governing instruction that encompasses both materiel (requiring new defense acquisition programs) and non-materiel (not requiring a new defense acquisition program) solutions. The Defense Acquisition University Glossary gives the following definitions. Material: Elements, constituents, or substances of which something is composed or can be made. It includes, but is not limited to, raw and processed material, parts, components, assemblies, fuels, and other items that may be worked into a more finished form in performance of a contract. Materiel: Equipment, apparatus, and supplies used by an organization or institution. Material specification: Applicable to raw material (chemical compound), mixtures (cleaning agents, paints), or semi-fabricated material (electrical cable, copper tubing) used in the fabrication of a product. Normally, a material specification applies to production, but may be prepared to control the development of a material. Materiel solution: A new item (including ships, tanks, self-propelled weapons, aircraft, etc., and related spares, repair parts, and support equipment, but excluding real property, installations, and utilities), developed or purchased to satisfy one or more capability requirements (or needs) and reduce or eliminate one or more capability gaps. 
DOTMLPF-P During the US Army's process of developing and fielding laser Directed Energy-Maneuver Short-Range Air Defense (DE-MSHORAD) on Strykers, the Army Rapid Capabilities and Critical Technologies Office (RCCTO) established an "Octagon", a stakeholder forum for doctrine, organization, training, materiel, leadership and education, personnel, facilities, and policy. Similar acronyms NATO uses a similar acronym, DOTMLPF-I, the "I" standing for "Interoperability": the ability to be interoperable with forces throughout the NATO alliance. NATO's AJP-01 Allied Joint Doctrine (2022) describes interoperability as the "ability of NATO, other political departments, agencies and, when appropriate, forces of partner nations to act together coherently, effectively and efficiently to achieve Allied tactical, operational and strategic objectives". Interoperability can be achieved within three dimensions: the technical, the procedural, and the human dimension. NATO's Capability Development (CAPDEV) is part of the NATO Defence Planning Process (NDPP), where DOTMLPFI is used as a framework to test and develop concepts and capabilities. While developing a concept, NATO describes two orientations: either to transform or to find a solution. The NATO CD&E Handbook (2021) describes using the DOTMLPFI framework and the lines of development when trying to find the solution. The Norwegian Defence Research Establishment (NDRE) has established a dedicated innovation center for all the stakeholders within the Norwegian defence sector, called ICE worx. ICE worx's model for rapid innovation is used when modern technology with a high technology readiness level is available to find new solutions with only minor needs for development. To ensure the operational effects of the innovation are achieved, the rapid innovation model uses the DOTMLPFI framework to identify how the development and experimentation of new technology will affect the different factors. DOTMLPFI-IE The Norwegian Armed Forces (NAF) use the DOTMLPFI framework to develop a total project plan (TPP) in investment processes. The TPP was formalized in the NAF in 2018. For the procurement of materiel, which is done by the Norwegian Defence Materiel Agency (NDMA), they use the PRINSIX project model, based on the PRINCE2 method. The TPP is developed in close coordination with the project plan developed by the NDMA; whereas the project plan covers how the materiel procurement is managed, the TPP covers all the factors needed for the procurement to reach the business goals and achieve the operational benefits of the investment. However, to ensure all factors in materiel investments are taken into consideration before the NAF is ready to actually start using the materiel and equipment, they added an I for Information systems and an E for Economy. Information systems include communication systems, battle management systems, radios, and information security. Often, information systems are not part of the specific materiel investment project but are regarded as Government Furnished Equipment (GFE). Economy was added to the DOTMLPFI-IE because the NAF in 2015 (and again in 2024) received a report from the NDRE stating that "Operating costs are given little weight in investment decisions". Treating economy as a separate factor in the TPP ensures that the processes before, during, and after the materiel procurement are planned in a way that mitigates risks relating to operating costs. 
External links DOTmLPF - P analysis Acquipedia entry on DOTmLPF Current JCIDS Manual and CJCSI 3170.01 at DAU United States Department of Defense Military terminology of the United States
0.763032
0.989903
0.755328
Poor Charlie's Almanack
Poor Charlie's Almanack is a collection of speeches and talks by Charlie Munger, compiled by Peter D. Kaufman. First published in 2005, it was released in an expanded edition three years later. It was republished in 2023 by Stripe Press, shortly before Munger's death. Overview Charlie Munger was the long-serving vice-chairman of Berkshire Hathaway. This book brings together his investing thoughts beyond his famous statement "I have nothing to add." Munger admired Benjamin Franklin, and the book's title is a tribute to Franklin's Poor Richard's Almanack. Net proceeds from sales of the book go to the Munger Research Center at the Huntington Library in San Marino, California. Contents Munger propounds the 'Multiple Mental Models' approach to decision-making. This collection of 'Big Ideas from Big Disciplines' contains an iconoclastic checklist for decision-making. The book is written in an unconventional style. The ideas are not listed in an orderly fashion but touched upon lightly, with pictures given alongside, in line with Munger's idea to "make the mind reach out to the idea" and thereby increase its retention in memory. The pictures make the ideas vivid and add a bit of geeky humor to the book. The "Lollapalooza Effect" is Munger's term for the confluence of multiple biases; according to Munger, the tendency toward extremism results from such confluences. These biases often occur at either the conscious or the subconscious level, and at both microeconomic and macroeconomic scales. The 25 cognitive biases are also explained in the book; Munger explains why people are so psychologically flawed, leading to mistakes in decision-making. Eleven talks The book includes some talks given by Munger: Harvard-Westlake School Commencement, June 13, 1986 (Read online) "A Lesson in Elementary, Worldly Wisdom as It Relates to Investment Management and Business", University of Southern California Marshall School of Business, April 14, 1994 (Read online) "A Lesson in Elementary, Worldly Wisdom, Revisited", Stanford Law School, April 19, 1996 "Practical Thought About Practical Thought?", July 20, 1996 "The Need for More Multidisciplinary Skills from Professionals: Educational Implications", Harvard Law School Class of 1948, April 24, 1998 "Investment Practices of Leading Charitable Foundations", Foundation Financial Officers Group, October 14, 1998 Breakfast Meeting of the Philanthropy Roundtable, November 10, 2000 "The Great Financial Scandal of 2003", Summer 2000 "Academic Economics: Strengths and Faults after Considering Interdisciplinary Needs", Herb Kay undergraduate lecture, University of California, Santa Barbara, October 3, 2003 The University of Southern California (USC) Gould School of Law Commencement Address, May 13, 2007 (added in the third edition) The Psychology of Human Misjudgment, updated in 2005 (Read the original 1995 version online). Reviews In November 2005, Kiplinger's Newsletter wrote: "Munger, 81, has always been media shy. That changed when Peter Kaufman compiled Munger's writing and speeches in a new book, Poor Charlie's Almanack: The Wit and Wisdom of Charles T. Munger." In August 2006, The Motley Fool wrote: "With 512 pages, there is something for everyone, and Poor Charlie's Almanack is an impressive and thorough tribute to one of the brightest, most pragmatic, and iconoclastic investment minds ever." References External links Official website Finance books American non-fiction books 2005 non-fiction books Speeches
0.766244
0.985742
0.755319
Reciprocal teaching
Reciprocal teaching is a powerful instructional method designed to foster reading comprehension through collaborative dialogue between educators and students. Rooted in the work of Annemarie Palincsar, this approach aims to empower students with specific reading strategies, such as Questioning, Clarifying, Summarizing, and Predicting, to actively construct meaning from text. Research indicates that reciprocal teaching promotes students' reading comprehension by encouraging active engagement and critical thinking during the reading process. By engaging in dialogue with teachers and peers, students deepen their understanding of text and develop essential literacy skills. Reciprocal teaching unfolds as a collaborative dialogue where teachers and students take turns assuming the role of teacher (Palincsar, 1986). This interactive approach is most effective in small-group settings, facilitated by educators or reading tutors who guide students through the comprehension process. In practice, reciprocal teaching empowers students to become active participants in their own learning, fostering a sense of ownership and responsibility for their academic success. By engaging in meaningful dialogue and employing specific reading strategies, students develop the skills necessary to comprehend and analyze complex texts effectively. Reciprocal teaching is best represented as a dialogue between teachers and students in which participants take turns assuming the role of teacher. Reciprocal teaching stands as a valuable tool for educators seeking to enhance students' reading comprehension skills. By fostering collaboration, critical thinking, and active engagement, this approach equips students with the tools they need to succeed academically and beyond. Enhancing Reading Comprehension through Reciprocal Teaching Reciprocal teaching is an evidence-based instructional approach designed to enhance reading comprehension by actively engaging students in four key strategies: predicting, clarifying, questioning, and summarizing. Coined as the "fab four" by Oczkus, these strategies empower students to take an active role in constructing meaning from text. Predicting involves students making educated guesses about the content of the text before reading, activating prior knowledge and setting the stage for comprehension. Clarifying entails addressing areas of confusion or uncertainty by asking questions and seeking clarification from the teacher or peers. Questioning involves students generating questions about the text to deepen understanding and promote critical thinking. Summarizing requires students to synthesize key information from the text and articulate it in their own words, reinforcing comprehension and retention. Throughout the reciprocal teaching process, teachers provide support and guidance to students, reinforcing their responses and facilitating meaningful dialogue. This collaborative approach fosters a supportive learning environment where students feel empowered to actively engage with text and construct meaning collaboratively. Research suggests that reciprocal teaching is effective in improving reading comprehension across diverse student populations. By incorporating active engagement, dialogue, and metacognitive strategies, reciprocal teaching equips students with the skills they need to comprehend and analyze complex texts effectively. Role of reading strategies Reciprocal teaching is an amalgamation of reading strategies that effective readers are thought to use. 
As stated by Pilonieta and Medina in their article "Reciprocal Teaching for the Primary Grades: We Can Do It, Too!", previous research conducted by Kincade and Beach (1996 ) indicates that proficient readers use specific comprehension strategies in their reading tasks, while poor readers do not. Proficient readers have well-practiced decoding and comprehension skills which allow them to proceed through texts somewhat automatically until some sort of triggering event alerts them to a comprehension failure. This trigger can be anything from an unacceptable accumulation of unknown concepts to an expectation that has not been fulfilled by the text. Whatever the trigger, proficient readers react to a comprehension breakdown by using a number of strategies in a planned, deliberate manner. These "fix-up" strategies range from simply slowing down the rate of reading or decoding, to re-reading, to consciously summarizing the material. Once the strategy (or strategies) has helped to restore meaning in the text, the successful reader can proceed again without conscious use of the strategy. All readers—no matter how skilled—occasionally reach cognitive failure when reading texts that are challenging, unfamiliar, or "inconsiderate"—i.e. structured or written in an unusual manner. Poor readers, on the other hand, do not demonstrate the same reaction when comprehension failure occurs. Some simply do not recognize the triggers that signal comprehension breakdown. Others are conscious that they do not understand the text, but do not have or are unable to employ strategies that help. Some use maladaptive strategies (such as avoidance) that do not aid in comprehension. Mayer notes in his paper on Learning Strategies that reciprocal teaching can help even novice learners become more adept at utilizing learning strategies and furthering their understanding of a subject. Mayer also notes that the reciprocal teaching process gives the students the chance to learn more by having the teachers as role models, and that the reciprocal teaching process gives beginners in an academic field a chance to learn from the experts by taking turns leading the class. Strategies Reciprocal teaching, a cognitive strategy instruction approach introduced by Palincsar and Brown, offers a structured framework for guiding students through comprehension processes during reading. This approach aims to equip students with specific strategies to prevent cognitive failure and enhance comprehension. The four fundamental strategies identified by Palincsar and Brown are: Questioning: Encourages students to generate questions about the text, promoting active engagement and deeper understanding. This strategy involves prompting students to pose questions about the content, characters, plot, or any unclear aspects of the text. Clarifying: Helps students address confusion or gaps in understanding by identifying and resolving obstacles to comprehension. Students are encouraged to clarify unfamiliar vocabulary, concepts, or complex passages by using context clues, dictionaries, or seeking additional information. Summarizing: Involves condensing the main ideas and key points of the text into concise summaries. Students learn to identify the most important information and organize it in their own words, fostering comprehension and retention. Predicting: Encourages students to make educated guesses about what might happen next in the text based on prior knowledge, context clues, and textual evidence. 
This strategy engages students in active prediction-making, promoting anticipation and deeper engagement with the text. Reciprocal teaching operates on the principle of "guided participation," where the teacher initially models the strategies, then gradually shifts responsibility to the students as they become more proficient. The process typically involves structured interactions between the teacher and small groups of students, with the teacher assuming the role of facilitator or "leader." For instance, the leader might begin by modeling each strategy explicitly, demonstrating how to generate questions, clarify confusion, summarize key points, and make predictions based on the text. As students gain proficiency, they take turns assuming the role of leader within their small group, practicing and refining the strategies with guidance and feedback from their peers and the teacher. By systematically integrating these comprehension-fostering and comprehension-monitoring strategies into their reading practices, students develop metacognitive awareness and become more adept at regulating their understanding of texts. This not only enhances their comprehension skills but also empowers them to become more independent and strategic readers over time. Predicting In the prediction phase of reciprocal teaching, readers actively engage in synthesizing their prior knowledge with the information gleaned from the text, projecting what might unfold next in the narrative or what additional insights they might encounter in informational passages (Doolittle et al., 2006). This predictive process not only encourages students to anticipate forthcoming events or information but also serves as a mechanism for them to confirm or refute their own hypotheses about the text's direction and the author's intentions. Drawing upon the text's structure and their own cognitive schemas, students formulate hypotheses that guide their reading experience, providing a purposeful framework for comprehension. As Williams emphasizes, while predictions need not be infallible, they should be articulated clearly, allowing for ongoing dialogue and reflection within the reciprocal teaching framework. During the prediction phase, the Predictor within the group may offer conjectures about the forthcoming content in informational texts or the progression of events in literary works. This collaborative process encourages students to actively contribute to the discussion and consider multiple perspectives. The reciprocal teaching cycle continues with subsequent sections of text, wherein students cyclically engage in reading, questioning, clarifying, summarizing, and predicting. While these core strategies form the foundation of reciprocal teaching, variations and expansions have been introduced by practitioners to accommodate diverse reading materials and learning objectives. For instance, additional reading strategies such as visualizing, making connections, inferencing, and questioning the author have been integrated into the reciprocal teaching format to further enhance students' comprehension and critical thinking skills. By incorporating a range of strategies tailored to the specific text and learning objectives, reciprocal teaching provides a flexible and comprehensive approach to literacy instruction. The sequence of reading, questioning, clarifying, summarizing, and predicting is then repeated with subsequent sections of text. 
Different reading strategies have been incorporated into the reciprocal teaching format by other practitioners. Some other reading strategies include visualizing, making connections, inferencing, and questioning the author. Questioning In the questioning phase of reciprocal teaching, readers engage in metacognitive reflection by monitoring their own comprehension and actively interrogating the text to deepen their understanding. This process of self-awareness fosters a deeper engagement with the material and facilitates the construction of meaning. Central to the questioning strategy is the identification of key information, themes, and concepts within the text that merit further exploration. By discerning what is central or significant, readers can formulate questions that serve as checkpoints for their comprehension and guide their reading process. The questions posed during the questioning phase serve multiple purposes, including clarifying unclear or puzzling parts of the text, making connections to previously learned concepts, and eliciting deeper insights. These questions prompt readers to actively interact with the text, encouraging them to consider its implications and relevance within a broader context. Within the reciprocal teaching framework, the Questioner assumes the role of guiding the discussion by posing questions about the selection. This collaborative exchange allows students to collectively navigate through the text, addressing areas of confusion and probing for deeper understanding. Questioning provides a structured context for exploring the text more deeply and constructing meaning. By actively interrogating the text through a series of thoughtful questions, readers develop a more nuanced understanding of its content and significance. As students engage in reciprocal teaching, the questioning phase empowers them to take ownership of their learning process and develop essential critical thinking skills that extend beyond the confines of the classroom. Through the iterative process of questioning and exploration, readers cultivate a deeper appreciation for the complexities of text and become more proficient and independent learners Clarifying In the reciprocal teaching framework, the clarifying strategy serves as a targeted approach to address decoding challenges, unfamiliar vocabulary, and comprehension obstacles. By equipping students with specific decoding techniques and fix-up strategies, the clarifying phase empowers them to overcome difficulties and enhance their understanding of the text. Central to the clarifying strategy is the identification and remediation of unclear or unfamiliar aspects of the text, including awkward sentence structures, unfamiliar vocabulary, ambiguous references, and complex concepts. By pinpointing these obstacles, students can employ various remedial actions, such as re-reading passages, using contextual clues, or consulting external resources like dictionaries or thesauruses. The clarifying phase not only fosters metacognitive awareness but also motivates students to actively engage in the process of comprehension repair. By recognizing and addressing areas of confusion, students develop a sense of agency in navigating through challenging texts and constructing meaning. Within the reciprocal teaching model, the Clarifier assumes the responsibility of addressing confusing parts of the text and providing responses to the questions posed by the group. 
This collaborative exchange encourages students to share their insights and perspectives, fostering a deeper understanding of the text through collective problem-solving. The clarifying strategy promotes the development of strategic reading habits, such as chunking text for better comprehension, utilizing spelling patterns for decoding, and employing fix-up strategies to maintain concentration. By equipping students with these skills, reciprocal teaching empowers them to become more proficient and confident readers. Through the clarifying phase, students not only enhance their comprehension skills but also cultivate a sense of ownership over their learning process. By actively engaging in the identification and resolution of comprehension obstacles, students develop resilience and adaptability in their approach to reading, laying the groundwork for lifelong learning and academic success. In short, The Clarifier will address confusing parts and attempt to answer the questions that were just posed. Summarizing In reciprocal teaching, the summarizing strategy plays a pivotal role in helping students distill the essential information, themes, and ideas from a text into a concise and coherent statement. By discriminating between important and less-important information, students engage in a process of synthesis that enhances their comprehension and retention of the material. At its core, summarizing involves the identification and integration of key elements of a text into a unified whole. This process requires students to extract the central ideas, main events, and significant details, while omitting extraneous or peripheral information. Through summarization, students create a framework for understanding the overarching message or purpose of the text. Summarizing can occur at various levels of granularity, ranging from individual sentences to entire passages or chapters. Regardless of the scope, the goal remains consistent: to capture the essence of the text in a succinct and accessible manner. This not only reinforces students' understanding of the material but also facilitates their ability to communicate the main ideas to others. Within the reciprocal teaching framework, the Summarizer takes on the responsibility of articulating the main idea of the text using their own words. This process encourages students to engage actively with the text, distilling complex information into manageable chunks and fostering a deeper comprehension of the material. Summarizing provides students with a valuable tool for organizing and structuring their thoughts about a text. By encapsulating the key points in a concise statement, students develop a clearer understanding of the text's structure and significance, enabling them to make connections and draw inferences more effectively. Through repeated practice, students refine their summarization skills, moving from summarizing at the sentence level to paragraphs and eventually entire texts. This iterative process not only strengthens their comprehension abilities but also builds their confidence as independent readers and critical thinkers. In short, The Summarizer will use his/her own words to tell the main idea of the text. This can happen anywhere in the story, and it should happen often for those students who are at-risk. It can happen first at sentence level, then paragraphs, then to whole text. Instructional format Reciprocal teaching follows a dialogic/dialectic process. Derber wrote that there were two reasons for choosing dialogue as the medium. 
First, it is a language format with which children are familiar (as opposed to writing, which may be too difficult for some struggling readers). Second, dialogue provides a useful vehicle for alternating control between teacher and students in a systematic and purposeful manner. Reciprocal teaching illustrates a number of unique ideas for teaching and learning and is based on both developmental and cognitive theories. The strategies embedded in reciprocal teaching represent those that successful learners engage in while interacting with text. They are thought to encourage self-regulation and self-monitoring and promote intentional learning. Reciprocal teaching also follows a very scaffolded curve, beginning with high levels of teacher instruction, modeling, and input, which is gradually withdrawn to the point that students are able to use the strategies independently. Reciprocal teaching begins with the students and teacher reading a short piece of text together. In the beginning stages, the teacher models the "Fab Four" strategies required by reciprocal teaching, and teacher and students share in conversation to come to a mutual agreement about the text. The teacher then specifically and explicitly models his or her thinking processes out loud, using each of the four reading strategies. Students follow the teacher's model with their own strategies, also verbalizing their thought processes for the other students to hear. Over time, the teacher models less and less frequently as students become more adept and confident with the strategies. Eventually, responsibility for leading the small-group discussions of the text and the strategies is handed over to the students. This gives the teacher or reading tutor the opportunity to diagnose strengths, weaknesses, misconceptions, and to provide follow-up as needed. Reciprocal teaching encompasses several techniques involving the who, what, and where, of learning: What is learned are cognitive strategies for reading comprehension rather than specific facts and procedures. The teaching focuses on how to learn rather than what to learn. Learning of the cognitive strategies occurs within real reading comprehension tasks rather than having each strategy taught in isolation. Learning takes place in an order, rather than learning everything separately. Students learn as apprentices within a cooperative learning group that is working together on a task. The students are learning through themselves, and through the others in their group. Vygotsky connection Reciprocal teaching aligns closely with Lev Vygotsky's theories on the interconnectedness of language, cognition, and learning, as outlined in his seminal work "Thought and Language". Vygotsky emphasized the profound connection between oral language development and cognitive growth, highlighting the pivotal role of social interactions in shaping individuals' thinking processes. This perspective finds additional support in the concept of "Learning by Teaching", where learners solidify their understanding of a subject matter by teaching it to others. Central to Vygotsky's framework is the notion of the zone of proximal development (ZPD)., which represents the space between what learners can achieve independently and what they can accomplish with guidance and support from more knowledgeable individuals. Reciprocal teaching operates within this zone by providing structured support and scaffolding to help students bridge the gap between their current abilities and the desired comprehension level. 
This process mirrors Vygotsky's idea of scaffolding, wherein temporary assistance is provided to learners as they engage in tasks just beyond their current level of competence, with the ultimate goal of fostering independent mastery. The iterative nature of reciprocal teaching, characterized by the gradual reduction of teacher support as students gain proficiency, reflects the principles of cognitive apprenticeship proposed by Collins, Brown, and Newman. This approach involves modeling, coaching, and gradually fading support, allowing learners to internalize and apply comprehension strategies autonomously. Thus, reciprocal teaching embodies Vygotsky's emphasis on the social and collaborative nature of learning, providing a framework for meaningful interaction and cognitive growth within the educational context. Current Uses The reciprocal teaching model has been in use for the past 20 years and has been adopted by a number of school districts and reading intervention programs across the United States and Canada. It has also been used as the model for a number of commercially produced reading programs such as Soar to Success, Connectors, and Into Connectors. Unfortunately, according to Williams, most students and teachers in this country have "never even heard of it". Available from Global Ed in New Zealand are the Connectors and Into Connectors series written by Jill Eggleton. These two series include both non-fiction and fiction texts. Abrams Learning Trends publishes Key Links Peer Readers by Jill Eggleton (2016). Reciprocal teaching is also being adopted and researched in countries other than the United States. For example, Yu-Fen Yang of Taiwan conducted a study to develop a reciprocal teaching/learning strategy in remedial English reading classes. Yang's study concluded that "...students expressed that they observed and learned from the teacher’s or their peers’ externalization of strategy usage. Students’ reading progress in the remedial instruction incorporating the RT system was also identified by the pre- and post-tests. This study suggests that there may be benefits for teachers in encouraging students to interact with others in order to clarify and discuss comprehension questions and constantly monitor and regulate their own reading". A 2008 study presented an effective implementation of reciprocal teaching to students diagnosed with mild to moderate forms of disability. Within this group, ten percent of students had difficulty in learning due to Down syndrome. The average age of the participants was around eighteen years. The researchers, Miriam Alfassi, Itzhak Weiss, and Hefziba Lifshitz, developed a study based on Palincsar and Brown's design of reciprocal teaching for students who were considered academically too low for the complex skills of reading comprehension. The study compared two styles of teaching, remediation/direct instruction and Palincsar/Brown reciprocal teaching. After twelve weeks of instruction and assessments, reciprocal teaching was found to produce a greater success rate in improving the literacy skills of the participants with mild to moderate learning disabilities. After the study was completed, researchers recommended reciprocal teaching so that students are taught in an interactive environment that includes meaningful and connected texts. 
This research, published in the European Journal of Special Needs Education, promotes reciprocal teaching for its structure in dialogues and for how students learn to apply those dialogues based on the reading taking place in instruction. Research in the United States has also been conducted on the use of reciprocal teaching in primary grades. Pilonieta and Medina conducted a series of procedures to implement their version of reciprocal teaching in elementary school students. The researchers adopted an age-appropriate model for reciprocal teaching and called it "Reciprocal Teaching for the Primary Grades", or RTPG. Their research shows that even in younger children, reciprocal teaching apparently benefited the students, and they showed retention of the RTPG when re-tested six months later. Reciprocal teaching has been heralded as effective in helping students improve their reading ability in pre-post trials or research studies. Further trials employing reciprocal teaching have consistently indicated the technique promotes reading comprehension as measured on standardized reading tests. Recent research continues to underscore the efficacy of reciprocal teaching in enhancing reading comprehension skills. For instance, a study by Lee and colleagues investigated the impact of reciprocal teaching on adolescent readers' comprehension in a digital learning environment. They found that students who received reciprocal teaching instruction demonstrated significantly higher levels of comprehension compared to those in traditional instruction settings, highlighting the adaptability of the strategy to modern educational contexts. A longitudinal study by Johnson et al. followed a cohort of elementary school students over three years and assessed the sustained effects of reciprocal teaching on their reading comprehension abilities. The researchers found that students who participated in reciprocal teaching interventions not only showed immediate improvements but also maintained higher levels of comprehension skills over time, indicating the long-term benefits of the approach. Additionally, a meta-analysis by Wang and Smith (2024) synthesized findings from multiple studies on reciprocal teaching and its effects on various aspects of reading comprehension, including vocabulary acquisition and critical thinking skills. Their analysis revealed consistent positive effects across diverse student populations and instructional settings, reaffirming reciprocal teaching's status as a robust instructional method for improving reading comprehension outcomes. These recent studies contribute to the growing body of evidence supporting reciprocal teaching as a versatile and effective approach for fostering reading comprehension skills among students of all ages and ability levels. References External links Palincsar & Brown: Reciprocal Teaching of Comprehension-Fostering and Comprehension-Monitoring Activities http://www.readingrockets.org/strategies/reciprocal_teaching/ http://www.powershow.com/view/1cda1-MjYyO/Reciprocal_Teaching_Teaching_Cognitive_Strategies_In_Context_Through_Dialogue_To_Enhance_Comprehen_flash_ppt_presentation Learning methods Learning to read Reading (process) Special education Pedagogy
0.775205
0.974337
0.755311
Michelangelo phenomenon
The Michelangelo phenomenon is an interpersonal process observed by psychologists in which close, romantic partners influence or 'sculpt' each other. Over time, the Michelangelo effect causes individuals to develop towards what they consider their "ideal selves". This happens because their partner sees them and acts around them in ways that promote this ideal. The phenomenon is referred to in contemporary marital therapy. Recent popular work in couples therapy and conflict resolution points to the importance of the Michelangelo phenomenon. Diana Kirschner reported that the phenomenon was common among couples reporting high levels of marital satisfaction. It is the opposite of the Blueberry phenomenon "in which interdependent individuals bring out the worst in each other." The Michelangelo phenomenon is related to the looking-glass self concept introduced by Charles Horton Cooley in his 1902 work Human Nature and the Social Order. This phenomenon has various positive effects for both the individual and the couple. Various factors impact the components and processes involved in the phenomenon. Description of the model Overview The Michelangelo phenomenon describes a three-step process in which close partners shape each other so as to bring forth one another's ideal selves. This ideal self is conceptualized as a collection of an individual's "dreams and aspirations" or "the constellation of skills, traits, and resources that an individual ideally wishes to acquire." These span different domains, such as one's profession, relationship, health, and personality. An example of an ideal self is one that includes "completing medical school, becoming more sociable, or learning to speak fluent Dutch." This is different from the actual self, which consists of attributes the self currently possesses, and the ought self, which consists of attributes the self feels obligated to possess. Note that in this article, the "self" refers to a specific, target individual. This phenomenon is significant given that the self does not experience growth in complete isolation from the influence of others. Yet, prior to 1999, much research on self growth consisted of examining individual processes. Research into the influence of others was neglected, even though those with whom the self interacts most regularly can lead to more constant, stable changes in disposition and behavior. The general topic of growth is itself worth studying given that people are motivated to work toward it. The three core parts of the phenomenon are as follows: partner perceptual affirmation, partner behavioral affirmation, and self-movement toward the ideal self. Components of the model Partner affirmation appears in the model as two different parts. Partner affirmation names how partners bring about aspects of the ideal self from the self. Partner perceptual affirmation describes how a partner's view of the self aligns with the self's view of their ideal self. A partner will show greater partner perceptual affirmation if they believe the self to be, or to be capable of being, the ideal self. In other words, Jay will show more perceptual affirmation if he sees his partner Kaylee, whose ideal self includes being competent at piano, as actually competent at piano or as capable of being competent at piano. Partner behavioral affirmation describes how a partner acts in a way that aligns with the ideal self. 
A partner, such as Jay, will show more partner behavioural affirmation if they act in a way such that Kaylee's ideal self can come forward, such as if he drives Kaylee to piano lessons. Self-movement toward the ideal self describes how the distance between the self and ideal self closes. Kaylee will experience self-movement toward the ideal self when she becomes more competent at piano. Note that both perceptual and behavioural aspects of partner affirmation can take place consciously or unconsciously. For example, someone with a partner who wants to be more sociable may consciously encourage them to spend more time with their friends, in an effort to help them meet this goal. This is conscious behavioural affirmation. On the other hand, knowing that sociability is a goal of their partner, someone may feel less apprehension when organising a social gathering in their space. This would inadvertently give the partner an opportunity to socialise and is an example of unconscious behavioural affirmation. These three components come together under two hypotheses which are part of the Michelangelo phenomenon. The partner affirmation hypothesis says that the more a partner's view of the self aligns with the ideal self, the more that partner will act in a way to bring out that ideal self. For example, the more Jay views Kaylee as being competent at piano, the more he will do things to elicit that view by way of positively enforcing her piano achievements or supporting her piano lessons. The movement toward ideal hypothesis says that the more the partner behaves in a way aligned with the ideal self, the more the self will become more like their ideal self. The more Jay acts in a way that aligns with Kaylee's ideal self of being competent at piano, the more Kaylee will increasingly become competent at piano. Variations in sculpting and related phenomena An affirming partner may shape someone through a series of selection mechanisms: Retroactive selection in which an individual reinforces behaviours of their partner by punishing or rewarding them Preemptive selection wherein an individual initiates an interaction that promotes certain behaviours in their partner Situation selection where an individual creates a situation in which the elicitation of desired partner behaviours is probable To add to these three types, other more specific examples of ideal-self-affirming behaviors a partner can enact includes expressing approval of the self's efforts toward goals and offering support such as strategy improvement tips. Note that not all of a partner's acts to reinforce certain qualities counts as affirming or, to be more specific, ideal-self-affirming. Exploring related phenomena can further clarify what partner affirmation is not. Partner enhancement is when a partner acts in a way that is more so positive than reflective of objective reality. For example, Jay acts toward Kaylee as if she is the best piano player, even if the average piano instructor would rate her as simply decent at piano. There is partner verification, which involves the partner reinforcement of qualities that the target, or self, believes to be true already. An example would be if Jay laughs at Kaylee's jokes and, subtly, reinforces the conception she has of herself as a funny person. Note that on another part of this spectrum, a partner may not affirm the self's ideal and may instead reinforce an ideal that does not belong to the self or that is the opposite of the self's ideal. 
There is, for example, the Pygmalion phenomenon, where the partner attempts to sculpt the target to align with their own ideals rather than the target's ideals. For example, this would occur if Jay, who unlike Kaylee seeks to be a regular voter, behaves in a way that draws out consistent voting behavior in Kaylee. Movement away from the ideal self might occur for Kaylee if Jay supported, for example, Kaylee's rare endeavors in binge drinking, a high-risk behavior antithetical to her ideal self as a healthy person. Other ways an individual may disaffirm their partner are "by communicating indifference, pessimism, or disapproval, by undermining [their] ideal pursuits, or by affirming qualities that are antithetical to [their] ideal self." This may occur passively, through a failure to affirm, or actively, through outright disaffirmation. The metaphor The phenomenon is named after the Italian Renaissance painter, sculptor, architect, poet and engineer Michelangelo (1475-1564). Michelangelo "described sculpting as a process whereby the artist released a hidden figure from the block of stone in which it slumbered." The metaphor of chipping away at a block of stone to reveal the 'ideal form' is extended, in this context, to close relationships. According to the Michelangelo phenomenon, a person will be 'sculpted' into their self-conceived ideal form by their partner. The metaphor and term were first introduced by the US psychologist Stephen Michael Drigotas and colleagues in 1999. Michelangelo phenomenon effects Couple well-being Drigotas et al. (1999) found support for their couple well-being hypothesis, which states that greater self-movement toward the ideal self is linked to greater functioning and health within the couple. Partner affirmation is generally beneficial to relationships as it increases perceived responsiveness, which increases the self's trust in their partner and the self's commitment. Drigotas et al. (1999) also found, across four studies, that individuals who helped sculpt their partners to resemble the partners' ideal selves experienced movement toward their own ideal selves as well. With Jay and Kaylee, this might look like Jay becoming more like his ideal of being a supportive teammate the more he helps Kaylee attain her ideal self. Individual well-being Drigotas found that the Michelangelo phenomenon is strongly linked to personal well-being across varied dimensions such as life satisfaction, self-esteem, and loneliness. The distance between our actual self, or current attributes, and ideal self impacts emotions such that a smaller distance engenders joy and a larger distance engenders emotions like sadness. Further, it is the specific aspect of partner behavioral affirmation that predicts personal well-being, and not the general relationship satisfaction that comes about as an effect of processes in the Michelangelo phenomenon. Factors impacting the Michelangelo phenomenon Several factors relating to attributes of either the individual (the self) or the individual's partner (the partner) have varying effects on the components of the phenomenon. Ideal similarity Ideal similarity can be defined as the alignment of a partner to the self's ideal self. Higher ideal similarity means there is a greater match between the partner's attributes and the ideal self's attributes. 
Higher ideal similarity is linked to higher partner affirmation, self movement toward the ideal self, and couple well being, vitality, adjustment. The effects of ideal similarity go beyond the realm of close partners as well. When individuals, or targets, were exposed to an experimental partner who was manipulated to resemble the targets' ideal selves, their perceptions of themselves and their partners increased such that targets thought themselves to be more capable of moving toward their ideal self and that partners were not only more affirming in the targets' minds, but were more attractive and generally more desirable interaction partners. Locomotion vs assessment orientations These two traits revolve around multiple parts of goal pursuit, including selection of the goal, evaluation of the goal, and pursuit of the goal.  Locomotion orientation describes the inclination of an individual to take action to reach their goals. Those more inclined toward locomotion tend to focus on quickly accomplishing realistic goals and tend to have more positive affect. Assessment orientation describes the inclination of an individual to focus more so on evaluation in their goal pursuit, rather than action. Those more inclined toward assessment tend to focus on dissecting goals, analyzing how to obtain those goals and tend to have more negative affect as well as more sensitive to how far they have to go to reach their goals. An individual's orientation impacts processes in the Michelangelo phenomenon. The orientation not only impacts the target's goal selection and pursuit, but how their partner affirms the target in their efforts and how the target affirms their own partner in their efforts. Specifically, individuals with locomotion orientations, as opposed to assessment orientations, seem more receptive to being sculpted; those with assessment orientations seem less receptive to being sculpted. As the partners who are sculpting, partners with locomotion as opposed to assessment orientations reported being more affirming of their partners' goal pursuits such that the targets were perceived to experience greater movement toward their ideal self. Other individual attributes Rusbult et al. (2005) speculate that there are three individual attributes which lead to increased self-movement toward the ideal self. These include insight or a solid construction of one's ideal and actual self, ability which includes skills and attributes like goal-relevant planning that are relevant to pursuit of the goal, and motivation to reach the goal, which includes commitment toward achieving the goal. Related phenomena Growth-as-hell model In contrast, it has been posited by Guggenbühl-Craig that it is precisely through disaffirmation that we grow and move towards our ideal-selves. This is because it is through disaffirmation that we are made aware of our flaws and can overcome them. Much like the Michelangelo phenomenon, this growth-as-hell model of self-growth and movement towards the ideal self is understood to occur most potently in close, romantic relationships. See also Symbolic interactionism William James George Herbert Mead References Interpersonal relationships 1999 introductions
0.773516
0.976445
0.755296
Rationalization (sociology)
In sociology, the term rationalization was coined by Max Weber, a German sociologist, jurist, and economist. Rationalization (or rationalisation) is the replacement of traditions, values, and emotions as motivators for behavior in society with concepts based on rationality and reason. The term rational is applied to people, their expressions, and their actions; it can also describe something broader, such as a worldview or an idea. For example, the implementation of bureaucracies in government is a kind of rationalization, as is the construction of high-efficiency living spaces in architecture and urban planning. One potential reason why rationalization of a culture may take place in the modern era is the process of globalization. Countries are becoming increasingly interlinked, and with the rise of technology, it is easier for countries to influence each other through social networking, the media and politics. An example of rationalization is the case of witch doctors in certain parts of Africa. Whilst many locals view them as an important part of their culture and traditions, development initiatives and aid workers have tried to rationalize the practice in order to educate the local people in modern medicine and practice. Many sociologists, critical theorists and contemporary philosophers have argued that rationalization, falsely assumed to be progress, has had a negative and dehumanizing effect on society, moving modernity away from the central tenets of Enlightenment. The founders of sociology reacted critically to rationalization: Capitalism Rationalization formed a central concept in the foundation of classical sociology, particularly with respect to the emphasis the discipline placed – by contrast with anthropology – on the nature of modern Western societies. The term was presented by the profoundly influential German antipositivist Max Weber, though its themes bear parallels to the critiques of modernity set forth by a number of scholars. A rejection of dialectism and sociocultural evolution informs the concept. Weber demonstrated rationalization in The Protestant Ethic and the Spirit of Capitalism, in which the aims of certain Protestant theologies, particularly Calvinism, are shown to have shifted towards rational means of economic gain as a way of dealing with their 'salvation anxiety'. The rational consequences of this doctrine, he argued, soon grew incompatible with its religious roots, and so the latter were eventually discarded. Weber continues his investigation into this matter in later works, notably in his studies on bureaucracy and on the classifications of authority. In these works he alludes to an inevitable move towards rationalization. Weber believed that a move towards rational-legal authority was inevitable. In charismatic authority, the death of a leader effectively ends the power of that authority, and only through a rationalized and bureaucratic base can this authority be passed on. Traditional authorities in rationalized societies also tend to develop a rational-legal base to better ensure a stable accession. (See also: Tripartite classification of authority) Whereas in traditional societies such as feudalism governing is managed under the traditional leadership of, for example, a queen or tribal chief, modern societies operate under rational-legal systems. 
For example, democratic systems attempt to remedy qualitative concerns (such as racial discrimination) with rationalized, quantitative means (for example, civil rights legislation). Weber described the eventual effects of rationalization in his Economy and Society as leading to a "polar night of icy darkness", in which increasing rationalization of human life traps individuals in an "iron cage" (or "steel-hard casing") of rule-based, rational control. Jürgen Habermas has argued that understanding rationalization properly requires going beyond Weber's notion of rationalization. It requires distinguishing between instrumental rationality, which involves calculation and efficiency (in other words, reducing all relationships to those of means and ends), and communicative rationality, which involves expanding the scope of mutual understanding in communication, the ability to expand this understanding through reflective discourse about communication, and making social and political life subject to this expanded understanding.
The Holocaust, modernity and ambivalence
For Zygmunt Bauman, rationalization as a manifestation of modernity may be closely associated with the events of the Holocaust. In Modernity and Ambivalence, Bauman attempted to give an account of the different approaches modern society adopts toward the stranger. He argued that, on the one hand, in a consumer-oriented economy the strange and the unfamiliar are always enticing; in different styles of food, different fashions and in tourism it is possible to experience the allure of what is unfamiliar. Yet this strangeness also has a more negative side. The stranger, because he cannot be controlled and ordered, is always the object of fear; he is the potential mugger, the person outside of society's borders who is constantly threatening. Bauman's most famous book, Modernity and the Holocaust, is an attempt to give a full account of the dangers of these kinds of fears. Drawing upon Hannah Arendt and Theodor W. Adorno's books on totalitarianism and the Enlightenment, Bauman argues that the Holocaust should not simply be considered to be an event in Jewish history, nor a regression to pre-modern barbarism. Rather, he says, the Holocaust should be seen as deeply connected to modernity and its order-making efforts. Procedural rationality, the division of labour into smaller and smaller tasks, the taxonomic categorization of different species, and the tendency to view rule-following as morally good all played their role in the Holocaust coming to pass. For this reason, Bauman argues that modern societies have not fully taken on board the lessons of the Holocaust; it is generally viewed – to use Bauman's metaphor – like a picture hanging on a wall, offering few lessons. In Bauman's analysis, the Jews became 'strangers' par excellence in Europe; the Final Solution was pictured by him as an extreme example of the attempts made by societies to excise the uncomfortable and indeterminate elements existing within them. Bauman, like the philosopher Giorgio Agamben, contended that the same processes of exclusion that were at work in the Holocaust could, and to an extent do, still come into play today.
Adorno and Horkheimer's definition of "enlightenment"
The term enlightenment is understood here in its widest sense, as the advance of thought: through enlightenment, individuals are meant to be liberated from their fears and installed within society as 'masters'.
Seen through the lens of rationalization, this enlightenment refines cogency by means of formal logic and frames discourse around the rational being; rationality in itself no longer carries the same importance, and individuals aspire to full enlightenment rather than merely to full rationality. In their analysis of contemporary western society, Dialectic of Enlightenment (1944, revised 1947), Theodor W. Adorno and Max Horkheimer developed a wide and pessimistic concept of enlightenment. In their analysis, enlightenment had its dark side: while trying to abolish superstition and myths by 'foundationalist' philosophy, it ignored its own 'mythical' basis. Its strivings towards totality and certainty led to an increasing instrumentalization of reason. In their view, the enlightenment itself should be enlightened and not posed as a 'myth-free' view of the world. For Marxist philosophy in general, rationalization is closely associated with the concept of "commodity fetishism", because not only are products designed to fulfill certain tasks, but employees are hired to fulfill specific tasks as well.
Consumption
Modern food consumption typifies the process of rationalization. Where food preparation in traditional societies is more laborious and technically inefficient, modern society has strived towards speed and precision in its delivery. Fast-food restaurants, designed to maximise profit, have strived toward total efficiency since their conception, and continue to do so. A strict level of efficiency has been accomplished in several ways, including stricter control of workers' actions, the replacement of more complicated systems with simpler, less time-consuming ones, simple numbered systems of value meals and the addition of drive-through windows. Rationalization is also observable in the replacement of more traditional stores, which may offer subjective advantages to consumers, such as what sociologists consider a less regulated, more natural environment, with modern stores offering the objective advantage of lower prices. The case of Walmart is one strong example demonstrating this transition. While Walmart stores have attracted considerable criticism for effectively displacing more traditional stores, these subjective social-value concerns have had minimal effect in limiting expansion of the enterprise, particularly in more rationalized nations, because of the public's preference for lower prices over the advantages sociologists claim for more traditional stores. The sociologist George Ritzer has used the term McDonaldization to refer not just to the actions of the fast food restaurant, but to the general process of rationalization. Ritzer distinguishes four primary components of McDonaldization:
Efficiency – the optimal method for accomplishing a task; the fastest method to get from point A to point B. Efficiency in McDonaldization means that every aspect of the organization is geared toward the minimization of time.
Calculability – goals are quantifiable (i.e., sales, money) rather than subjective (i.e., taste, labour). McDonaldization developed the notion that quantity equals quality, and that a large amount of product delivered to the customer in a short amount of time is the same as a high quality product. "They run their organization in such a way that a person can walk into any McDonald's and receive the same sandwiches prepared in precisely the same way. This results in a highly rational system that specifies every action and leaves nothing to chance".
Predictability – standardized and uniform services. "Predictability" means that no matter where a person goes, they will receive the same service and the same product at every interaction with the corporation. This also applies to the workers in those organizations; their tasks are highly repetitive and predictable routines.
Control – standardized and uniform employees, and the replacement of human by non-human technologies.
Further objects of rationalization
One rational tendency is towards increasing the efficiency and output of the human body. Several means can be employed in reaching this end, including trends towards regular exercise, dieting, increased hygiene, drugs, and an emphasis on optimal nutrition. As well as increasing lifespans, these allow for stronger, leaner, more optimized bodies for quickly performing tasks.
See also
References
Further reading
Adorno, Theodor. Negative Dialectics. Translated by E.B. Ashton. London: Routledge, 1973.
Bauman, Zygmunt. Modernity and The Holocaust. Ithaca, NY: Cornell University Press, 1989.
Green, Robert W. (ed.). Protestantism, Capitalism, and Social Science. Lexington, MA: Heath, 1973.
Macionis, J., and Gerber, L. (2010). Sociology, 7th edition ("McDonaldization principles").
Principles of learning
Researchers in the field of educational psychology have identified several principles of learning (sometimes referred to as laws of learning) which seem generally applicable to the learning process. These principles have been discovered, tested, and applied in real-world scenarios and situations. They provide additional insight into what makes people learn most effectively. Edward Thorndike developed the first three "Laws of learning": readiness, exercise, and effect.
Readiness
Since learning is an active process, students must have adequate rest, health, and physical ability. Basic needs of students must be satisfied before they are ready or capable of learning. Students who are exhausted or in ill health cannot learn much. If they are distracted by outside responsibilities, interests, or worries, have overcrowded schedules, or other unresolved issues, students may have little interest in learning. For example, the prospect of securing good marks in a school examination can create the mental and emotional readiness that leads students to work harder at acquiring knowledge.
Exercise
Every time practice occurs, learning continues. Forms of practice include student recall, review and summary, and manual drill and physical applications. All of these serve to create learning habits. The instructor must repeat important items of subject matter at reasonable intervals, and provide opportunities for students to practice while making sure that this process is directed toward a goal. In some cases, however, regular practice is not needed once the skill has been acquired: a person who has learnt to ride a bicycle, for instance, will not forget the knowledge or skill even after a long period without exercising it.
Effect
The principle of effect is that learning is strengthened when accompanied by a pleasant or satisfying feeling, and weakened when associated with an unpleasant one. Every learning experience should therefore contain elements that leave the student with some good feelings. A student's chance of success is definitely increased if the learning experience is a pleasant one.
Primacy
Things learned first create a strong impression that is difficult to erase, so what is taught must be right the first time. The instructor must present subject matter in a logical order, step by step, making sure the students have already learned the preceding step. If the task is learned in isolation, if it is not initially applied to the overall performance, or if it must be relearned, the process can be confusing and time consuming. Preparing and following a lesson plan facilitates delivery of the subject matter correctly the very first time.
Recency
The principle of recency states that things most recently learned are best remembered. Conversely, the further a student is removed time-wise from a new fact or understanding, the more difficult it is to remember.
Intensity
The more intense the material taught, the more likely it will be retained. A sharp, clear, vivid, dramatic, or exciting learning experience teaches more than a routine or boring experience. The principle of intensity implies that a student will learn more from the real thing than from a substitute. Examples, analogies, and personal experiences also make learning come to life. Instructors should make full use of the senses (hearing, sight, touch, taste, smell, balance, rhythm, depth perception, and others).
Freedom
Since learning is an active process, students must have freedom: freedom of choice, freedom of action, freedom to bear the results of action – these are the three great freedoms that constitute personal responsibility. If no freedom is granted, students may have little interest in learning.
Requirements
The law of requirement states that "we must have something to obtain or do something." It can be an ability, skill, instrument or anything that may help us to learn or gain something. A starting point or root is needed; for example, if you want to draw a person, you need to have the materials with which to draw, and you must know how to draw a point, a line, a figure and so on until you reach your goal, which is to draw a person.
Laws of learning applied to learning games
The principles of learning have been presented as an explanation for why learning games (the use of games to introduce material, improve understanding, or increase retention) can show such incredible results. Learning games give players repeated practice in an engaging setting, which supports flow and motivation and increases the positive feelings toward the activity; this links back to the principles of exercise, readiness, and effect. Games use immersion and engagement as ways to create riveting experiences for players, which is part of the principle of intensity. Finally, part of the primary appeal of games is that they are fun. Although fun is hard to define, it is clear that it involves feelings such as engagement, satisfaction, pleasure, and enjoyment which are part of the principle of effect.
See also
References
External links
Contemporary Educational Psychology/Chapter 2: The Learning Process on Wikibooks
Further reading
Hilgard, E. and G. Bower (1966). Theories of Learning. New York: Appleton-Century-Crofts.
Seligman, M. (1970). On the generality of the laws of learning. Psychological Review, 77, 406–418.
Thorndike, E. (1932). The Fundamentals of Learning. New York: Teachers College Press.
Metalinguistics
Metalinguistics is the branch of linguistics that studies language and its relationship to other cultural behaviors. It is the study of how different parts of speech and communication interact with each other and reflect the way people live and communicate together. Jacob L. Mey in his book, Trends in Linguistics, describes Mikhail Bakhtin's interpretation of metalinguistics as "encompassing the life history of a speech community, with an orientation toward a study of large events in the speech life of people and embody changes in various cultures and ages." Literacy development Metalinguistic skills involve understanding of the rules used to govern language. Scholar Patrick Hartwell points out how substantial it is for students to develop these capabilities, especially heightened phonological awareness, which is a key precursor to literacy. An essential aspect to language development is focused on the student being aware of language and the components of language. This idea is also examined in the article, 'Metalinguistic Awareness and Literacy Acquisition in Different Languages', that centers on how the construction of a language and writing strategy shape an individual's ability to read. It also discusses the manner in which bilingualism increases particular elements of metalinguistic awareness. Published research studies by Elizabeth McAllister have concluded that metalinguistic abilities are associated to cognitive development and is contingent on metalinguistic awareness which relates to reading skill level, academic success and cultural environment that starts at infancy and continues through preschool. According to Text in Education and Society, some examples of metalinguistic skills include discussing, examining, thinking about language, grammar and reading comprehension. The text also states that a student's recognition or self-correction of language in verbal and written form helps them further advance their skills. The book also illustrates manners in which literature can form connections or create boundaries between educational intelligence and practical knowledge. Gail Gillon wrote the book, Phonological Awareness, which illustrates the connection between phonological awareness and metalinguistic awareness's in literacy learning. It essentially states that a student's ability to understand the spoken word and their ability to recognize a word and decode it are dependent on each other. The text also discusses ways in which students struggling with speech impairments and reading difficulties can improve their learning process. In linguistics Linguists use this term to designate activities associated with metalanguage, a language composed of the entirety of words forming linguistic terminology (for example, syntax, semantics, phoneme, lexeme... as well as terms in more current usage, such as word, sentence, letter, etc.) Metalinguistics is used to refer to the language, whether natural or formalized (as in logic), which is itself used to speak of language; to a language whose sole function is to describe a language. The language itself must constitute the sole sphere of application for the entire vocabulary. Experts are undecided about the value of awareness of metalanguage to language learners, and some "schools of thought" in language learning have been heavily against it. 
Metalinguistic awareness and bilingualism Metalinguistic awareness refers to the understanding that language is a system of communication, bound to rules, and forms the basis for the ability to discuss different ways to use language (Baten, Hofman, & Loeys, 2011). In other words, it is the ability to consciously analyze language and its sub-parts, to know how they operate and how they are incorporated into the wider language system (Beceren, 2010). An individual with such ability is aware that linguistic forms and structure can interact and be manipulated to produce a vast variety of meanings. Words are only arbitrarily and symbolically associated with their referents, and are separable from them. For example, a dog is named "Cat", but the word "Cat" is only a representation for the animal, dog. It does not make the dog a cat. The term was first used by Harvard professor Courtney Cazden in 1974 to demonstrate the shift of linguistic intelligence across languages. Metalinguistic awareness in bilingual learners is the ability to objectively function outside one language system and to objectify languages’ rules, structures and functions. Code-switching and translation are examples of bilinguals’ metalinguistic awareness. Metalinguistics awareness was used as a construct in research extensively in the mid 1980s and early 1990s. Metalinguistic awareness is a theme that has frequently appeared in the study of bilingualism. It can be divided into four subcategories, namely phonological, word, syntactic and pragmatic awareness (Tunmer, Herriman, & Nesdale, 1988). Amongst the four, phonological and word awareness are the two aspects of metalinguistic awareness that have garnered the greatest attention in bilingual literacy research. Research has shown metalinguistic awareness in bilinguals to be a crucial component because of its documented relationship and positive effects on language ability, symbolic development and literacy skills. Indeed, many studies investigating the impact of bilingualism on phonological and word awareness have indicated a positive bilingual effect (Baten, et al., 2011; Chen et al., 2004; Goetz, 2003; Kang, 2010; Ransdell, Barbier, & Niit, 2006; Whitehurst & Lonigan, 1998). Bilinguals are simultaneously learning and switching between two languages, which may facilitate the development of stronger phonological awareness. It is postulated that bilinguals’ experiences of acquiring and maintaining two different languages aid them in developing an explicit and articulated understanding of how language works (Adesope, Lavin, Thompson, & Ungerleider, 2010). Hence they are equipped with stronger metalinguistic awareness as compared to their monolingual counterparts. In their book Literacy and Orality, scholars David R. Olson and Nancy Torrance explore the relationship between literacy and metalinguistic awareness, citing a link that arises from the fact that, in both reading and writing, language can become the object of thought and discussion. Prose reading and writing can be an instrument of metalinguistic reflection and in those cases one must assess the particular meaning of terms and of grammatical relations between them in order, either to understand such texts or write them. The self-referential capacity of language and metalinguistics has also been explored as problematic for interpreters and translators, who necessarily work between languages. The issue has been studied to determine how signed language interpreters render self-referential instances across languages. 
Because spoken and signed languages share no phonological parameters, interpreters working between two modalities use a variety of tactics to render such references, including fingerspelling, description, modeling signs, using words, pointing to objects, pointing to signs, using metalanguage, and using multiple strategies simultaneously or serially. Deaf-hearing interpreting teams, in which an interpreter who can hear and an interpreter who is deaf work together in a relay fashion, also employ a variety of strategies to render such metalinguistic references. See also Metacognition Metalinguistic Awareness References Other sources Baten, K., Hofman, F., & Loeys, T. (2011). Cross-linguistic activation in bilingual sentence processing: The role of word class meaning. Bilingualism: language and cognition, 14(3), 351-359. Beceren, S. (2010). Comparison of metalinguistic development in sequential bilinguals and monolinguals. The International Journal of Educational Researchers 2010, 1(1), 28-40. Tunmer, W. E., Herriman, M. L., & Nesdale, A. R. (1988). Metalinguistic abilities and beginning reading. Reading Research Quarterly, 23(2), 134-158. Chen, X., Anderson, R. C., Li, W., Hao, M., Wu, X., & Shu, H. (2004). Phonological awareness of bilingual and monolingual Chinese children. Journal of Educational Psychology, 96(1), 142-151. Kang, J. (2010). Do bilingual children possess better phonological awareness? Investigation of Korean monolingual and Korean-English bilingual children. Reading and Writing, 1-21. Ransdell, S., Barbier, M.-L., & Niit, T. (2006). Metacognitions about language skill and working memory among monolingual and bilingual college students: When does multilingualism matter? International Journal of Bilingual Education and Bilingualism, 9(6), 728-741. Whitehurst, G. J., & Lonigan, C. J. (1998). Child development and emergent literacy. Child Development, 69(3), 848-872. Adesope, O. O., Lavin, T., Thompson, T., & Ungerleider, C. (2010). A systematic review and meta-analysis of the cognitive correlates of bilingualism. Review of Educational Research, 80(2), 207-245. Goetz, P. J. (2003). The effects of bilingualism on the theory of mind development. Bilingualism: Language and Cognition, 6(1), 1-15. Linguistics Branches of linguistics Language acquisition
Documentation
Documentation is any communicable material that is used to describe, explain or instruct regarding some attributes of an object, system or procedure, such as its parts, assembly, installation, maintenance, and use. As a form of knowledge management and knowledge organization, documentation can be provided on paper, online, or on digital or analog media, such as audio tape or CDs. Examples are user guides, white papers, online help, and quick-reference guides. Paper or hard-copy documentation has become less common. Documentation is often distributed via websites, software products, and other online applications. Documentation as a set of instructional materials shouldn't be confused with documentation science, the study of the recording and retrieval of information. Principles for producing documentation While associated International Organization for Standardization (ISO) standards are not easily available publicly, a guide from other sources for this topic may serve the purpose. Documentation development may involve document drafting, formatting, submitting, reviewing, approving, distributing, reposting and tracking, etc., and are convened by associated standard operating procedure in a regulatory industry. It could also involve creating content from scratch. Documentation should be easy to read and understand. If it is too long and too wordy, it may be misunderstood or ignored. Clear, concise words should be used, and sentences should be limited to a maximum of 15 words. Documentation intended for a general audience should avoid gender-specific terms and cultural biases. In a series of procedures, steps should be clearly numbered. Producing documentation Technical writers and corporate communicators are professionals whose field and work is documentation. Ideally, technical writers have a background in both the subject matter and also in writing, managing content, and information architecture. Technical writers more commonly collaborate with subject-matter experts, such as engineers, technical experts, medical professionals, etc. to define and then create documentation to meet the user's needs. Corporate communications includes other types of written documentation, for example: Market communications (MarCom): MarCom writers endeavor to convey the company's value proposition through a variety of print, electronic, and social media. This area of corporate writing is often engaged in responding to proposals. Technical communication (TechCom): Technical writers document a company's product or service. Technical publications can include user guides, installation and configuration manuals, and troubleshooting and repair procedures. Legal writing: This type of documentation is often prepared by attorneys or paralegals. Compliance documentation: This type of documentation codifies standard operating procedures, for any regulatory compliance needs, as for safety approval, taxation, financing, and technical approval. Healthcare documentation: This field of documentation encompasses the timely recording and validation of events that have occurred during the course of providing health care. 
Documentation in computer science Types The following are typical software documentation types: Request for proposal Requirements/statement of work/scope of work Software design and functional specification System design and functional specifications Change management, error and enhancement tracking User acceptance testing Manpages The following are typical hardware and service documentation types: Network diagrams Network maps Datasheet for IT systems (server, switch, e.g.) Service catalog and service portfolio (Information Technology Infrastructure Library) Software Documentation Folder (SDF) tool A common type of software document written in the simulation industry is the SDF. When developing software for a simulator, which can range from embedded avionics devices to 3D terrain databases by way of full motion control systems, the engineer keeps a notebook detailing the development "the build" of the project or module. The document can be a wiki page, Microsoft Word document or other environment. They should contain a requirements section, an interface section to detail the communication interface of the software. Often a notes section is used to detail the proof of concept, and then track errors and enhancements. Finally, a testing section to document how the software was tested. This documents conformance to the client's requirements. The result is a detailed description of how the software is designed, how to build and install the software on the target device, and any known defects and workarounds. This build document enables future developers and maintainers to come up to speed on the software in a timely manner, and also provides a roadmap to modifying code or searching for bugs. Software tools for network inventory and configuration These software tools can automatically collect data of your network equipment. The data could be for inventory and for configuration information. The Information Technology Infrastructure Library requests to create such a database as a basis for all information for the IT responsible. It is also the basis for IT documentation. Examples include XIA Configuration. Documentation in criminal justice "Documentation" is the preferred term for the process of populating criminal databases. Examples include the National Counterterrorism Center's Terrorist Identities Datamart Environment, sex offender registries, and gang databases. Documentation in early childhood education Documentation, as it pertains to the early childhood education field, is "when we notice and value children's ideas, thinking, questions, and theories about the world and then collect traces of their work (drawings, photographs of the children in action, and transcripts of their words) to share with a wider community". Thus, documentation is a process, used to link the educator's knowledge and learning of the child/children with the families, other collaborators, and even to the children themselves. Documentation is an integral part of the cycle of inquiry - observing, reflecting, documenting, sharing and responding. Pedagogical documentation, in terms of the teacher documentation, is the "teacher's story of the movement in children's understanding". According to Stephanie Cox Suarez in "Documentation - Transforming our Perspectives", "teachers are considered researchers, and documentation is a research tool to support knowledge building among children and adults". Documentation can take many different styles in the classroom. 
The following exemplifies ways in which documentation can make the research, or learning, visible:
Documentation panels (bulletin-board-like presentation with multiple pictures and descriptions about the project or event)
Daily log (a log kept every day that records the play and learning in the classroom)
Documentation developed by or with the children (when observing children during documentation, the child's lens of the observation is used in the actual documentation)
Individual portfolios (documentation used to track and highlight the development of each child)
Electronic documentation (using apps and devices to share documentation with families and collaborators)
Transcripts or recordings of conversations (using recording in documentation can bring about deeper reflections for both the educator and the child)
Learning stories (a narrative used to "describe learning and help children see themselves as powerful learners")
The classroom as documentation (reflections and documentation of the physical environment of a classroom)
Documentation is certainly a process in and of itself, and it is also a process within the educator. The following is the development of documentation as it progresses for and in the educator themselves:
Develop(s) habits of documentation
Become(s) comfortable with going public with recounting of activities
Develop(s) visual literacy skills
Conceptualize(s) the purpose of documentation as making learning styles visible, and
Share(s) visible theories for interpretation purposes and further design of curriculum.
See also
Authoring Bibliographic control Change control Citation Index Copyright Description Document Documentation (field) Documentation science Document identifier Document management system Documentary Freedom of information Glossary Historical document Index (publishing) ISO 2384:1977 ISO 259:1984 ISO 5123:1984 ISO 3602:1989 ISO 6357:1985 ISO 690 ISO 5964 ISO 9001 IEC 61355 International Standard Bibliographic Description Journal of Documentation Licensing Letterhead List of Contents Technical documentation User guide Medical certificate Publishing Records management Software documentation Style guide Technical communication
References
External links
IEEE Professional Communication Society
Documentation Definition by The Linux Information Project (LINFO)
Information & Documentation List of selected tools
Library of articles on documentation: Technical writing and documentation articles
Competence (human resources)
Competence is the set of demonstrable characteristics and skills that enable and improve the efficiency or performance of a job. Competency is a series of knowledge, abilities, skills, experiences and behaviors, which leads to effective performance in an individual's activities. Competency is measurable and can be developed through training. Some scholars see "competence" as a combination of practical & theoretical knowledge, cognitive skills, behavior, and values used to improve performance; or as the state or quality of being adequately or well qualified, having the ability to perform a specific role. For instance, management competency might include system thinking and emotional intelligence, as well as skills in influence and negotiation. Etymology The term "competence" first appeared in an article authored by R.W. White in 1959 as a concept for performance motivation. In 1970, Craig C. Lundberg defined this concept as "Planning the Executive Development Program". The term gained traction in 1973 when David McClelland wrote a seminal paper entitled, "Testing for Competence Rather Than for Intelligence". The term, created by McClelland, was commissioned by the State Department to explain characteristics common to high-performing agents of embassy, as well as help them in recruitment and development. It has since been popularized by Richard Boyatzis, and many others including T.F. Gilbert (1978), who used the concept in performance improvement. Its uses vary widely, which has led to considerable misunderstanding. Studies on competency indicate that competency covers a very complicated and extensive field, with different scientists having different definitions of competency. In 1982, Zemek conducted a study on the definition of competence. He interviewed several specialists in the field of training to evaluate what creates competence. After the interviews, he concluded: "There is no clear and unique agreement about what makes competency." Competency has multiple different meanings, and remains one of the most diffuse terms in the management development sector, and the organizational and occupational literature. Here are several definitions of competence by various researchers: Hayes (1979): Competence generally includes knowledge, motivation, social characteristic and roles, or skills of one person in accordance with the demands of organizations of their clerks. Boyatzis (1982): Competence lies in the individual's capacity which superposes the person's behavior with needed parameters as the results of this adaptation make the organization to hire him. Albanese (1989): Competence is made of individual characteristics which are used to effect an organization's management. Woodruff (1991): Competence is a combination of two topics: personal competence and personal merit at work. Personal merit refers to the skill a person has in a particular work environment. This is dependent on a person's true competence in his/her field. Mansfield (1997): The personal specifications which effect a better performance are called competence. Standard (2001) ICB (IPMA Competence Baseline): Competence is made of knowledge, personal attitudes, skills and related experiences which are needed for the person's success. Rankin (2002): A collection of behaviors and skills which people are expected to show in their organization. Unido (United Nations Industrial Development Organization) (2002): Competence is defined as knowledge, skill and specifications which can cause a person to act better. 
This does not consider their special proficiency in that job. Industrial Development Organization of the United States (2002): Competence is a collection of personal skills related to knowledge and personal specifications which can create competence in people without practice or other specialized knowledge. CRNBC (College of Registered Nurses of British Columbia) (2009): Competence is a collection of knowledge, skills, behavior and power of judgement which can produce competence in people without sufficient practice or specialized knowledge. Hay Group (2012): Measurable characteristics of a person which are related to efficient actions at work, organization and special culture. The following definitions are applicable to the term competency: Chan and her team (the University of Hong Kong) (2017, 2019): Holistic competency is an umbrella term inclusive of different types of generic skills (e.g. critical thinking, problem-solving skills), positive values, and attitudes (e.g. resilience, appreciation for others) which are essential for students' life-long learning and whole-person development. The ARZESH Competency Model (2018): Competency is a series of knowledge, abilities, skills, experiences and behaviors, which leads to effective performance in an individual's activities. Competency is measurable and can be developed through training. It can also be broken down into smaller criteria. The most recent definition has been formalized by Javier Perez-Capdevila in 2017, who writes that competences are fusions obtained from the complete mixture of the fuzzy sets of aptitudes and attitudes possessed by employees, both in a general and singular way. In these fusions, the degree of belonging to the resulting group expresses the extent to which these competencies are possessed.
Human resource management
Competency is also used as a more general description of the requirements of human beings in organizations and communities. Competencies and competency models may be applicable to all employees in an organization or they may be position specific. Competencies are also what people need to be successful in their jobs. Job competencies are not the same as job tasks. Competencies include all the related knowledge, skills, abilities, and attributes that form a person's job. This set of context-specific qualities is correlated with superior job performance and can be used as a standard against which to measure job performance as well as to develop, recruit, and hire employees. Competencies provide organizations with a way to define in behavioral terms what it is that people need to do to produce the results that the organization desires, in a way that is in keeping with its culture. Having competencies defined in the organization allows employees to know what they need to do to be productive. When properly defined, competencies allow organizations to evaluate the extent to which employees are demonstrating the desired behaviors and where they may be lacking. Competencies in which employees are lacking can be learned; this allows organizations to identify what resources they may need to help the employee develop and learn those competencies. Competencies can distinguish and differentiate an organization from competitors. While two organizations may be alike in financial results, the way in which the results were achieved could be different based on the competencies that fit their particular strategy and organizational culture.
Lastly, competencies can provide a structured model that can be used to integrate management practices throughout the organization. Competencies that align their recruiting, performance management, training and development and reward practices to reinforce key behaviors that the organization values. Competencies required for a post are identified through job analysis or task analysis, using techniques such as the critical incident technique, work diaries, and work sampling. A future focus is recommended for strategic reasons. If someone is able to do required tasks at the target level of proficiency, they are considered "competent" in that area. For instance, management competency might include system thinking and emotional intelligence, as well as skills in influence and negotiation. Identifying employee competencies can contribute to improved organizational performance. They are most effective if they meet several critical standards, including linkage to, and leverage within an organization's human resource system. Competency development The process of competency development is a lifelong series of doing and reflecting. As competencies apply to careers as well as jobs, lifelong competency development is linked with personal development as a management concept. And it requires a special environment, where the rules are necessary in order to introduce novices, but people at a more advanced level of competency will systematically break the rules if the situations requires it. This environment is synonymously described using terms such as learning organization, knowledge creation, self-organizing and empowerment. Within a specific organization or professional community, professional competency is frequently valued. They are usually the same competencies that must be demonstrated in a job interview. But today there is another way of looking at it: that there are general areas of occupational competency required to retain a post, or earn a promotion. For all organizations and communities there is a set of primary tasks that competent people have to contribute to all the time. For a university student, for example, the primary tasks could be handling theory, methods or the information of an assignment. In emergencies, competent people may react to a situation following behaviors they have previously found successful. To be competent a person would need to be able to interpret the situation in the context and have a repertoire of possible actions to take. Being sufficiently trained in each possible action included in their repertoire can make a great difference. Regardless of training, competency grows through experience and the extent of an individual's capacity to learn and adapt. Research has found that it is not easy to assess competencies and competence development. Skill acquisition Dreyfus and Dreyfus introduced nomenclature for the levels of competence in competency development. 
The five levels proposed by Dreyfus and Dreyfus are part of what is now referred to as the Dreyfus model of skill acquisition: Novice: Rule-based behavior, strongly limited and inflexible Experienced Beginner: Incorporates aspects of the situation Practitioner: Acting consciously from long-term goals and plans Knowledgeable practitioner: Sees the situation as a whole and acts from personal conviction Expert: Has an intuitive understanding of the situation and zooms in on the central aspects Four areas of competency Dreyfus and Dreyfus also introduced four general areas of competency: Meaning competency: The person assessed must be able to identify with the purpose of the organization or community and act from the preferred future in accordance with the values of the organization or community. Relation competency: The ability to create and nurture connections to the stakeholders of the primary tasks must be shown. Learning competency: The person assessed must be able to create and look for situations that make it possible to experiment with the set of solutions that make it possible to complete the primary tasks and reflect on the experience. Change competency: The person assessed must be able to act in new ways when it will promote the purpose of the organization or community and make the preferred future come to life. Four stages of competence Types of competencies Fayek & Omar (2016) have formulated six types of competencies in relation to the construction industry: Behavioral competencies: Individual performance competencies are more specific than organizational competencies and capabilities. As such, it is important that they be defined in a measurable behavioral context in order to validate applicability and the degree of expertise (e.g. development of talent) Core competencies: Capabilities and/or technical expertise unique to an organization, i.e. core competencies differentiate an organization from its competition (e.g. the technologies, methodologies, strategies or processes of the organization that create competitive advantage in the marketplace). An organizational core competency is an organization's strategic strength. Core competencies differentiate an organization from its competition and create a company's competitive advantage in the marketplace. Functional competencies: Functional competencies are job-specific competencies that drive proven high-performance, quality results for a given position. They are often technical or operational in nature (e.g., "backing up a database" is a functional competency). Management competencies: Management competencies identify the specific attributes and capabilities that illustrate an individual's management potential. Unlike leadership characteristics, management characteristics can be learned and developed with the proper training and resources. Competencies in this category should demonstrate pertinent behaviors for management to be effective. Organizational competencies: The mission, vision, values, culture and core competencies of the organization that sets the tone and/or context in which the work of the organization is carried out (e.g. customer-driven, risk taking and cutting edge). How we treat the patient is part of the patient's treatment. Technical competencies: Depending on the position, both technical and performance capabilities should be weighed carefully as employment decisions are made. For example, organizations that tend to hire or promote solely on the basis of technical skills, i.e. 
to the exclusion of other competencies, may experience an increase in performance-related issues (e.g. systems software designs versus relationship management skills) Examples of competences Here are some examples of competences: Attention to detail Is alert in a high-risk environment; follows detailed procedures and ensures accuracy in documentation and data; carefully monitors gauges, instruments or processes; concentrates on routine work details; organizes and maintains a system of records. Commitment to safety Understands, encourages and carries out the principles of integrated safety management; complies with or oversees the compliance with Laboratory safety policies and procedures; completes all required ES&H training; takes personal responsibility for safety. Communication Writes and speaks effectively, using conventions proper to the situation; states own opinions clearly and concisely; demonstrates openness and honesty; listens well during meetings and feedback sessions; explains reasoning behind own opinions; asks others for their opinions and feedback; asks questions to ensure understanding; exercises a professional approach with others using all appropriate tools of communication; uses consideration and tact when offering opinions. Cooperation/teamwork Works harmoniously with others to get a job done; responds positively to instructions and procedures; able to work well with staff, co-workers, peers and managers; shares critical information with everyone involved in a project; works effectively on projects that cross functional lines; helps to set a tone of cooperation within the work group and across groups; coordinates own work with others; seeks opinions; values working relationships; when appropriate facilitates discussion before decision-making process is complete. Customer service Listens and responds effectively to customer questions; resolves customer problems to the customer's satisfaction; respects all internal and external customers; uses a team approach when dealing with customers; follows up to evaluate customer satisfaction; measures customer satisfaction effectively; commits to exceeding customer expectations. Flexibility Remains open-minded and changes opinions on the basis of new information; performs a wide variety of tasks and changes focus quickly as demands change; manages transitions from task to task effectively; adapts to varying customer needs. Job knowledge/technical knowledge Demonstrates knowledge of techniques, skills, equipment, procedures and materials. Applies knowledge to identify issues and internal problems; works to develop additional technical knowledge and skills. Initiative and creativity Plans work and carries out tasks without detailed instructions; makes constructive suggestions; prepares for problems or opportunities in advance; undertakes additional responsibilities; responds to situations as they arise with minimal supervision; creates novel solutions to problems; evaluates new technology as potential solutions to existing problems. Innovation Able to challenge conventional practices; adapts established methods for new uses; pursues ongoing system improvement; creates novel solutions to problems; evaluates new technology as potential solutions to existing problems. Judgement Makes sound decisions; bases decisions on fact rather than emotion; analyzes problems skillfully; uses logic to reach solutions. Leadership Able to become a role model for the team and lead from the front. Reliable and have the capacity to motivate subordinates. 
Solves problems and takes important decisions. Organization Able to manage multiple projects; able to determine project urgency in a practical way; uses goals to guide actions; creates detailed action plans; organizes and schedules people and tasks effectively. Problem solving Anticipates problems; sees how a problem and its solution will affect other units; gathers information before making decisions; weighs alternatives against objectives and arrives at reasonable decisions; adapts well to changing priorities, deadlines and directions; works to eliminate all processes which do not add value; is willing to take action, even under pressure, criticism or tight deadlines; takes informed risks; recognizes and accurately evaluates the signs of a problem; analyzes current procedures for possible improvements; notifies supervisor of problems in a timely manner. Quality control Establishes high standards and measures; is able to maintain high standards despite pressing deadlines; does work right the first time and inspects work for flaws; tests new methods thoroughly; considers excellence a fundamental priority. Quality of work Maintains high standards despite pressing deadlines; does work right the first time; corrects own errors; regularly produces accurate, thorough, professional work. Quantity of work Produces an appropriate quantity of work; does not get bogged down in unnecessary detail; able to manage multiple projects; able to determine project urgency in a meaningful and practical way; organizes and schedules people and tasks. Reliability Personally responsible; completes work in a timely, consistent manner; works hours necessary to complete assigned work; is regularly present and punctual; arrives prepared for work; is committed to doing the best job possible; keeps commitments. Responsiveness to requests for service Responds to requests for service in a timely and thorough manner; does what is necessary to ensure customer satisfaction; prioritizes customer needs; follows up to evaluate customer satisfaction. Staff development Works to improve the performance of oneself and others by pursuing opportunities for continuous learning/feedback; constructively helps and coaches others in their professional development; exhibits a “can-do” approach and inspires associates to excel; develops a team spirit. Support of diversity Treats all people with respect; values diverse perspectives; participates in diversity training opportunities; provides a supportive work environment for the multicultural workforce; applies the employer's philosophy of equal employment opportunity; shows sensitivity to individual differences; treats others fairly without regard to race, sex, color, religion, or sexual orientation; recognizes differences as opportunities to learn and gain by working together; values and encourages unique skills and talents; seeks and considers diverse perspectives and ideas. Competency models Many Human Resource professionals are employing a competitive competency model to strengthen nearly every facet of talent management—from recruiting and performance management, to training and development, to succession planning and more. A job competency model is a comprehensive, behaviorally based job description that both potential and current employees and their managers can use to measure and manage performance and establish development plans. Often there is an accompanying visual representative competency profile as well. 
One of the most common pitfalls that organizations stumble upon is that, when creating a competency model, they focus too much on job descriptions instead of the behaviors of an employee. Experts say that the steps required to create a competency model include:
Gathering information about job roles.
Interviewing subject matter experts to discover current critical competencies and how they envision their roles changing in the future.
Identifying high-performer behaviors.
Creating, reviewing (or vetting) and delivering the competency model.
Once the competency model has been created, the final step involves communicating how the organization plans to use the competency model to support initiatives such as recruiting, performance management, career development and succession planning, as well as other HR business processes.
See also
Dunning–Kruger effect, the tendency for incompetent people to grossly overestimate their skills
Peter principle, the tendency for competent workers to be promoted just beyond the level of their competence
Management style
References
Further reading
Eraut, M. (1994). Developing Professional Knowledge and Competence. London: Routledge.
Gilbert, T.F. (1978). Human Competence. Engineering Worthy Performance. New York: McGraw-Hill.
Ecomodernism
Ecomodernism is an environmental philosophy which argues that technological development can protect nature and improve human wellbeing through eco-economic decoupling, i.e., by separating economic growth from environmental impacts.
Description
Ecomodernism embraces substituting natural ecological services with energy, technology, and synthetic solutions, as long as they help reduce the impact on the environment. Among other things, ecomodernists embrace:
high-tech farming techniques that produce more food using less land and water, thus freeing up areas for conservation (precision agriculture, vertical farming, regenerative agriculture and genetically modified foods), as well as cellular agriculture (cultured meat), alternative proteins, and fish from aquaculture farms
desalination and water purification technologies
advanced waste recycling and the circular economy
sustainable forestry and the ecological restoration of natural habitats and biodiversity, which includes a wide scope of projects such as erosion control, reforestation, removal of non-native species and weeds, revegetation of degraded lands, daylighting streams, the reintroduction of native species (preferably native species that have local adaptation), and habitat and range improvement for targeted species
water conservation
green building and green infrastructure, including Building Information Modeling in green building
smart grids
resource efficiency
urbanization, smart cities, urban density and verticalization
adoption of electric vehicles and hydrogen vehicles
use of drone light shows, projection mapping and 3D holograms as sustainable technological alternatives to fireworks
automation
carbon capture and storage and direct air capture
green nanotechnology (nanofilters for water purification, nanomaterials for air pollution control, nanocatalysts for more efficient chemical processes, nanostructured materials for improved solar cells, nanomaterials for enhancing battery performance, nanoparticles for soil and groundwater remediation and nanosensors for detecting pollutants)
energy storage
alternative materials such as bioplastics and bio-based materials, and high-tech materials such as graphene and carbon fibers
the clean energy transition, i.e. replacing low power-density energy sources (e.g. firewood in low-income countries, which leads to deforestation) with high power-density sources as long as their net impact on the environment is lower (nuclear power plants and advanced renewable energy sources)
artificial intelligence for resource optimization (predictive maintenance in industrial settings to reduce waste, optimized routing for transportation to reduce fuel consumption, AI-driven climate modeling for better environmental predictions and supply chain optimization to reduce transportation emissions)
climate engineering
synthetic fuels and biofuels
3D printing and 3D food printing
digitalization, miniaturization, servitization of products and dematerialization
Key among the goals of an ecomodern environmental ethic is the use of technology to intensify human activity and make more room for wild nature. The debates that form the foundation of ecomodernism were born from disappointment with traditional organizations that rejected energy sources such as nuclear power, thereby increasing reliance on fossil gas and increasing emissions instead of reducing them (e.g. the Energiewende).
Coming from evidence-based, scientific and pragmatic positions, ecomodernism engages in the debate on how best to protect natural environments, how to accelerate decarbonization to mitigate climate change, and how to accelerate the economic and social development of the world's poor. In these debates, ecomodernism distinguishes itself from other schools of thought, including ecological economics, degrowth, population reduction, laissez-faire economics, the "soft energy" path, and central planning. Ecomodernism draws on American pragmatism, political ecology, evolutionary economics, and modernism. Diversity of ideas and dissent are claimed as values in order to avoid the intolerance born of extremism and dogmatism. Ecomodernist organisations have been established in many countries, including Germany, Finland, and Sweden. While the word 'ecomodernism' has only been used to describe modernist environmentalism since 2013, the term has a longer history in academic design writing, and ecomodernist ideas were developed within a number of earlier texts, including Martin Lewis's Green Delusions, Stewart Brand's Whole Earth Discipline and Emma Marris's Rambunctious Garden. In their 2015 manifesto, 18 self-professed ecomodernists—including scholars from the Breakthrough Institute, Harvard University, Jadavpur University, and the Long Now Foundation—sought to clarify the movement's vision: "we affirm one long-standing environmental ideal, that humanity must shrink its impacts on the environment to make more room for nature, while we reject another, that human societies must harmonize with nature to avoid economic and ecological collapse." An Ecomodernist Manifesto In April 2015, a group of 18 self-described ecomodernists collectively published An Ecomodernist Manifesto. Reception and criticism Some environmental journalists have praised An Ecomodernist Manifesto. At The New York Times, Eduardo Porter wrote approvingly of ecomodernism's alternative approach to sustainable development. In an article titled "Manifesto Calls for an End to 'People Are Bad' Environmentalism", Slate's Eric Holthaus wrote "It's inclusive, it's exciting, and it gives environmentalists something to fight for for a change." The science journal Nature ran an editorial on the manifesto. Ecomodernism has been criticized for inadequately recognizing what Holly Jean Buck, Assistant Professor of Environment and Sustainability, says are the exploitative, violent and unequal dimensions of technological modernisation. Sociologist Eileen Crist, Associate Professor Emerita, observed that ecomodernism is founded on a western philosophy of humanism with no regard to "nonhuman freedoms". Human geographer Rosemary-Claire Collard and co-authors assert that ecomodernism is incompatible with neoliberal capitalism, despite the philosophy's claims to the contrary. By contrast, in his book "Ecomodernism: Technology, Politics and the Climate Crisis" Jonathan Symons argues that ecomodernism belongs in the social democratic tradition, promoting a third way between laissez-faire and anti-capitalism, and calling for transformative state investments in technological transformation and human development. Likewise, in "A sympathetic diagnosis of the Ecomodernist Manifesto", Paul Robbins and Sarah A. Moore describe the similarities and points of departure between ecomodernism and political ecology. Another major strand of criticism towards ecomodernism comes from proponents of degrowth or the steady-state economy. 
Eighteen ecological economists published a long rejoinder titled "A Degrowth Response to an Ecomodernist Manifesto", writing "the ecomodernists provide neither a very inspiring blueprint for future development strategies nor much in the way of solutions to our environmental and energy woes." At the Breakthrough Institute's annual Dialogue in June 2015, several environmental scholars offered a critique of ecomodernism. Bruno Latour argued that the modernity celebrated in An Ecomodernist Manifesto is a myth. Jenny Price argued that the manifesto offered a simplistic view of "humanity" and "nature", which she said are "made invisible" by talking about them in such broad terms. See also Bright green environmentalism Earthship Ecological civilization Ecological modernization Environmental technology Reflexive modernization Solarpunk Technogaianism Utopian architecture Nuclear power proposed as renewable energy References External links Bright green environmentalism Environmentalism Environmental social science concepts Environmental philosophy
0.770669
0.97995
0.755216
Deliberative democracy
Deliberative democracy or discursive democracy is a form of democracy in which deliberation is central to decision-making. Deliberative democracy seeks quality over quantity by limiting decision-makers to a smaller but more representative sample of the population that is given the time and resources to focus on one issue. It often adopts elements of both consensus decision-making and majority rule. Deliberative democracy differs from traditional democratic theory in that authentic deliberation, not mere voting, is the primary source of legitimacy for the law. Deliberative democracy is related to consultative democracy, in which public consultation with citizens is central to democratic processes. The distance between deliberative democracy and concepts like representative democracy or direct democracy is debated. While some practitioners and theorists use deliberative democracy to describe elected bodies whose members propose and enact legislation, Hélène Landemore and others increasingly use deliberative democracy to refer to decision-making by randomly-selected lay citizens with equal power. Deliberative democracy has a long history of practice and theory traced back to ancient times, with an increase in academic attention in the 1990s, and growing implementations since 2010. Joseph M. Bessette has been credited with coining the term in his 1980 work Deliberative Democracy: The Majority Principle in Republican Government. Overview Deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtains through economic wealth or the support of interest groups. The roots of deliberative democracy can be traced back to Aristotle and his notion of politics; however, the German philosopher Jürgen Habermas' work on communicative rationality and the public sphere is often identified as a major work in this area. Deliberative democracy can be practiced by decision-makers in both representative democracies and direct democracies. In elitist deliberative democracy, principles of deliberative democracy apply to elite societal decision-making bodies, such as legislatures and courts; in populist deliberative democracy, principles of deliberative democracy apply to groups of lay citizens who are empowered to make decisions. One purpose of populist deliberative democracy can be to use deliberation among a group of lay citizens to distill a more authentic public opinion about societal issues for other decision-makers to consider; devices such as the deliberative opinion poll have been designed to achieve this goal. Another purpose of populist deliberative democracy can, like direct democracy, result directly in binding law. If political decisions are made by deliberation but not by the people themselves or their elected representatives, then there is no democratic element; this deliberative process is called elite deliberation. James Fearon and Portia Pedro believe deliberative processes most often generate ideal conditions of impartiality, rationality and knowledge of the relevant facts, resulting in more morally correct outcomes. 
Former diplomat Carne Ross contends that the processes are more civil, collaborative, and evidence-based than the debates in traditional town hall meetings or in internet forums if citizens know their debates will impact society. Some fear the influence of a skilled orator. John Burnheim critiques representative democracy as requiring citizens to vote for a large package of policies and preferences bundled together, much of which a voter might not want. He argues that this does not translate voter preferences as well as deliberative groups, each of which is given the time and the ability to focus on one issue. Characteristics Fishkin's model of deliberation James Fishkin, who has designed practical implementations of deliberative democracy through deliberative polling for over 15 years in various countries, describes five characteristics essential for legitimate deliberation: Information: The extent to which participants are given access to reasonably accurate information that they believe to be relevant to the issue Substantive balance: The extent to which arguments offered by one side or from one perspective are answered by considerations offered by those who hold other perspectives Diversity: The extent to which the major positions in the public are represented by participants in the discussion Conscientiousness: The extent to which participants sincerely weigh the merits of the arguments Equal consideration: The extent to which arguments offered by all participants are considered on the merits regardless of which participants offer them Studies by James Fishkin and others have concluded that deliberative democracy tends to produce outcomes which are superior to those in other forms of democracy. Desirable outcomes in their research include less partisanship and more sympathy with opposing views; more respect for evidence-based reasoning rather than opinion; a greater commitment to the decisions taken by those involved; and a greater chance for widely shared consensus to emerge, thus promoting social cohesion between people from different backgrounds. Fishkin cites extensive empirical support for the increase in public spiritedness that is often caused by participation in deliberation, and says theoretical support can be traced back to foundational democratic thinkers such as John Stuart Mill and Alexis de Tocqueville. Cohen's outline Joshua Cohen, a student of John Rawls, argued that the five main features of deliberative democracy include: An ongoing independent association with expected continuation. The citizens in the democracy structure their institutions such that deliberation is the deciding factor in the creation of the institutions and the institutions allow deliberation to continue. A commitment to the respect of a pluralism of values and aims within the polity. The citizens consider deliberative procedure as the source of legitimacy, and prefer the causal history of legitimation for each law to be transparent and easily traceable to the deliberative process. Each member recognizes and respects other members' deliberative capacity. Cohen presents deliberative democracy as more than a theory of legitimacy, and forms a body of substantive rights around it based on achieving "ideal deliberation": It is free in two ways: The participants consider themselves bound solely by the results and preconditions of the deliberation. They are free from any authority of prior norms or requirements. 
The participants suppose that they can act on the decision made; the deliberative process is a sufficient reason to comply with the decision reached. Parties to deliberation are required to state reasons for their proposals, and proposals are accepted or rejected based on the reasons given, as the content of the very deliberation taking place. Participants are equal in two ways: Formal: anyone can put forth proposals, criticize, and support measures. There is no substantive hierarchy. Substantive: The participants are not limited or bound by certain distributions of power, resources, or pre-existing norms. "The participants…do not regard themselves as bound by the existing system of rights, except insofar as that system establishes the framework of free deliberation among equals." Deliberation aims at a rationally motivated consensus: it aims to find reasons acceptable to all who are committed to such a system of decision-making. When consensus or something near enough is not possible, majoritarian decision making is used. In Democracy and Liberty, an essay published in 1998, Cohen updated his idea of pluralism to "reasonable pluralism" – the acceptance of different, incompatible worldviews and the importance of good faith deliberative efforts to ensure that as far as possible the holders of these views can live together on terms acceptable to all. Gutmann and Thompson's model Amy Gutmann and Dennis F. Thompson's definition captures the elements that are found in most conceptions of deliberative democracy. They define it as "a form of government in which free and equal citizens and their representatives justify decisions in a process in which they give one another reasons that are mutually acceptable and generally accessible, with the aim of reaching decisions that are binding on all at present but open to challenge in the future". They state that deliberative democracy has four requirements, which refer to the kind of reasons that citizens and their representatives are expected to give to one another: Reciprocal. The reasons should be acceptable to free and equal persons seeking fair terms of cooperation. Accessible. The reasons must be given in public and the content must be understandable to the relevant audience. Binding. The reason-giving process leads to a decision or law that is enforced for some period of time. The participants do not deliberate just for the sake of deliberation or for individual enlightenment. Dynamic or Provisional. The participants must keep open the possibility of changing their minds, and continuing a reason-giving dialogue that can challenge previous decisions and laws. Standards of good deliberation - from first to second generation (Bächtiger et al., 2018) For Bächtiger, Dryzek, Mansbridge and Warren, the ideal standards of "good deliberation" which deliberative democracy should strive towards have changed: History Early examples Consensus-based decision making similar to deliberative democracy has been found in different degrees and variations throughout the world going back millennia. The most discussed early example of deliberative democracy arose in Greece as Athenian democracy during the sixth century BC. Athenian democracy was both deliberative and largely direct: some decisions were made by representatives but most were made by "the people" directly. Athenian democracy came to an end in 322 BC. Even some 18th century leaders advocating for representative democracy mention the importance of deliberation among elected representatives. 
Recent scholarship The deliberative element of democracy was not widely studied by academics until the late 20th century. According to Professor Stephen Tierney, perhaps the earliest notable example of academic interest in the deliberative aspects of democracy occurred in John Rawls's 1971 work A Theory of Justice. Joseph M. Bessette has been credited with coining the term "deliberative democracy" in his 1980 work Deliberative Democracy: The Majority Principle in Republican Government, and went on to elaborate and defend the notion in "The Mild Voice of Reason" (1994). In the 1990s, deliberative democracy began to attract substantial attention from political scientists. According to Professor John Dryzek, early work on deliberative democracy was part of efforts to develop a theory of democratic legitimacy. Theorists such as Carne Ross advocate deliberative democracy as a complete alternative to representative democracy. The more common view, held by contributors such as James Fishkin, is that direct deliberative democracy can be complementary to traditional representative democracy. Others contributing to the notion of deliberative democracy include Carlos Nino, Jon Elster, Roberto Gargarella, John Gastil, Jürgen Habermas, David Held, Joshua Cohen, Amy Gutmann, Noëlle McAfee, Rense Bos, Jane Mansbridge, Jose Luis Marti, Dennis Thompson, Benny Hjern, Hal Koch, Seyla Benhabib, Ethan Leib, Charles Sabel, Jeffrey K. Tulis, David Estlund, Mariah Zeisberg, Jeffrey L. McNairn, Iris Marion Young, Robert B. Talisse, and Hélène Landemore. Although political theorists took the lead in the study of deliberative democracy, political scientists have in recent years begun to investigate its processes. One of the main challenges currently is to discover more about the actual conditions under which the ideals of deliberative democracy are more or less likely to be realized. Drawing on the work of Hannah Arendt, Shmuel Lederman laments the fact that "deliberation and agonism have become almost two different schools of thought" that are discussed as "mutually exclusive conceptions of politics" as seen in the works of Chantal Mouffe, Ernesto Laclau, and William E. Connolly. Giuseppe Ballacci argues that agonism and deliberation are not only compatible but mutually dependent: "a properly understood agonism requires the use of deliberative skills but also that even a strongly deliberative politics could not be completely exempt from some of the consequences of agonism". Most recently, scholarship has focused on the emergence of a 'systemic approach' to the study of deliberation. This suggests that the deliberative capacity of a democratic system needs to be understood through the interconnection of the variety of sites of deliberation which exist, rather than any single setting. Some studies have conducted experiments to examine how deliberative democracy addresses the problems of sustainability and underrepresentation of future generations. Although not always the case, participation in deliberation has been found to shift participants' opinions in favour of environmental positions. Platforms and algorithms Aviv Ovadya also argues for implementing bridging-based algorithms in major platforms by empowering deliberative groups that are representative of the platform's users to control the design and implementation of the algorithm. He argues this would reduce sensationalism, political polarization and democratic backsliding. Jamie Susskind likewise calls for deliberative groups to make these kinds of decisions. 
Meta commissioned a representative deliberative process in 2022 to advise the company on how to deal with climate misinformation on its platforms. Modern examples The OECD documented hundreds of examples and finds their use increasing since 2010. For example, a representative sample of 4000 lay citizens used a 'Citizens' congress' to coalesce around a plan on how to rebuild New Orleans after Hurricane Katrina. See also Deliberative assembly Deliberative referendum Group decision making Jury Informed consent Liquid democracy Mediated deliberation Open source governance Participatory democracy Political equality Public reason References Sources Bessette, Joseph (1980) "Deliberative Democracy: The Majority Principle in Republican Government," in How Democratic is the Constitution?, Washington, D.C., AEI Press. pp. 102–116. Bessette, Joseph, (1994) The Mild Voice of Reason: Deliberative Democracy & American National Government Chicago: University of Chicago Press. Blattberg, C. (2003) "Patriotic, Not Deliberative, Democracy" Critical Review of International Social and Political Philosophy 6, no. 1, pp. 155–74. Reprinted as ch. 2 of Blattberg, C. (2009) Patriotic Elaborations: Essays in Practical Philosophy. Montreal and Kingston: McGill-Queen's University Press. Cohen, J. (1989) "Deliberative Democracy and Democratic Legitimacy" (Hamlin, A. and Pettit, P. eds.), The Good Polity. Oxford: Blackwell. pp. 17–34 Cohen, J. (1997) "Deliberation and Democratic Legitimacy" (James Bohman & William Rehg eds.) Deliberative Democracy: Essays on Reason and Politics (Bohman, J. and Rehg, W. eds.). Fishkin, James & Peter Laslett, eds. (2003). Debating Deliberative Democracy. Wiley-Blackwell. Gutmann, Amy and Dennis Thompson (1996). Democracy and Disagreement. Princeton University Press. Gutmann, Amy and Dennis Thompson (2002). Why Deliberative Democracy? Princeton University Press. Leib, Ethan J. "Can Direct Democracy Be Made Deliberative?", Buffalo Law Review, Vol. 54, 2006 Owen, D. and Smith, G. (2015). "Survey article: Deliberation, democracy, and the systemic turn." Journal of Political Philosophy 23.2: 213-234 Painter, Kimberly, (2013) "Deliberative Democracy in Action: Exploring the 2012 City of Austin Bond Development Process" Applied Research Project Texas State University. Steenhuis, Quinten. (2004) "The Deliberative Opinion Poll: Promises and Challenges". Carnegie Mellon University. Unpublished thesis. Available Online Talisse, Robert, (2004) Democracy after Liberalism Publisher: Routledge Thompson, Dennis F (2008). "Deliberative Democratic Theory and Empirical Political Science," Annual Review of Political Science 11: 497-520. Tulis, Jeffrey K., (1988) The Rhetorical Presidency Publisher: Princeton University Press Tulis, Jeffrey K., (2003) "Deliberation Between Institutions," in Debating Deliberative Democracy, eds. James Fishkin and Peter Laslett. Wiley-Blackwell. Uhr, J. (1998) Deliberative Democracy in Australia: The Changing Place of Parliament, Cambridge: Cambridge University Press Further reading Deliberative Democracy for Diabolical Times'' (2024) by André Bächtiger and John S. Dryzek. Cambridge University Press. 
External links Website of Centre for Deliberative Democracy and Global Governance, University of Canberra describes itself as "the world-leading research centre for the study of deliberative democracy" Website of the Deliberative Democracy Lab at Stanford University, which focuses on researching, advising on and conducting deliberative polling Website of Journal of Deliberative Democracy, which synthesizes the research, opinions, projects, experiments and experiences of academics and practitioners in an open-access journal Website of The National Coalition for Dialogue and Deliberation, which is a hub that connects people to each other and to thousands of resources like best-practices on public engagement and conflict resolution Podcast series by the newDemocracy Foundation on deliberative democracy among other initiatives Direct democracy Political theories Deliberative groups Group decision-making Communication Participatory democracy Types of democracy
0.760172
0.993472
0.75521
Case method
The case method is a teaching approach that uses decision-forcing cases to put students in the role of people who were faced with difficult decisions at some point in the past. It developed during the course of the twentieth century from its origins in the casebook method of teaching law pioneered by Harvard legal scholar Christopher C. Langdell. In sharp contrast to many other teaching methods, the case method requires that instructors refrain from providing their own opinions about the decisions in question. Rather, the chief task of instructors who use the case method is asking students to devise, describe, and defend solutions to the problems presented by each case. Comparison with the casebook method of teaching law The case method evolved from the casebook method, a mode of teaching based on Socratic principles pioneered at Harvard Law School by Christopher C. Langdell. Like the casebook method, the case method calls upon students to take on the role of an actual person faced with a difficult problem. Decision-forcing cases A decision-forcing case is a kind of decision game. Like other kinds of decision games, a decision-forcing case puts students in the role of a person faced with a problem (often called the "protagonist") and asks them to devise, defend, discuss, and refine solutions to that problem. However, in sharp contrast to decision games that contain fictional elements, decision-forcing cases are based entirely upon reliable descriptions of real events. A decision-forcing case is also a kind of case study. That is, it is an examination of an incident that took place at some time in the past. However, in contrast to a retrospective case study, which provides a complete description of the events in question, a decision-forcing case is based upon an "interrupted narrative." This is an account that stops whenever the protagonist finds himself faced with an important decision. In other words, while retrospective case studies ask students to analyze past decisions with the aid of hindsight, decision-forcing cases ask students to engage problems prospectively. Criticisms of decision-forcing cases In recent years, following corporate scandals and the global financial crisis, the case method has been criticized for contributing to a narrow, instrumental, amoral, managerial perspective on business where making decisions which maximise profit is all that matters, ignoring the social responsibilities of organisations. It is argued that the case method puts too much emphasis on taking action and not enough on thoughtful reflection to see things from different perspectives. It has been suggested that different approaches to case writing that do not put students in the ‘shoes’ of a manager be encouraged to address these concerns. Role play Every decision-forcing case has a protagonist, the historical person who was faced with the problem or problems that students are asked to solve. Thus, in engaging these problems, students necessarily engage in some degree of role play. Some case teachers, such as those of the Marine Corps University, place a great deal of emphasis on role play, to the point of addressing each student with the name and titles of the protagonist of the case. (A student playing the role of a king, for example, is asked "Your Majesty, what are your orders?") Other case teachers, such as those at the Harvard Business School, place less emphasis on role play, asking students "what would you do if you were the protagonist of the case." 
Historical solution After discussing student solutions to the problem at the heart of a decision-forcing case, a case teacher will often provide a description of the historical solution, that is, the decision made by the protagonist of the case. Also known as "the rest of the story", "the epilogue", or (particularly at Harvard University) "the 'B' case", the description of the historical solution can take the form of a printed article, a video, a slide presentation, a short lecture, or even an appearance by the protagonist. Whatever the form of the description of the historical solution, the case teacher must take care to avoid giving the impression that the historical solution is the "right answer." Rather, he should point out that the historical solution to the problem serves primarily to provide students with a baseline to which they can compare their own solutions. Some case teachers will refrain from providing the historical solution to students. One reason for not providing the historical solution is to encourage students to do their own research about the outcome of the case. Another is to encourage students to think about the decision after the end of the class discussion. "Analytic and problem-solving learning," writes Kirsten Lundgren of Columbia University, "can be all the more powerful when the 'what happened' is left unanswered." Complex cases A classic decision-forcing case asks students to solve a single problem faced by a single protagonist at a particular time. There are, however, decision-forcing cases in which students play the role of a single protagonist who is faced with a series of problems, two or more protagonists dealing with the same problem, or two or more protagonists dealing with two or more related problems. Decision-forcing staff rides A decision-forcing case conducted in the place where the historical decisions at the heart of the case were made is called a "decision-forcing staff ride." Also known as an "on-site decision-forcing case", a decision-forcing staff ride should not be confused with the two very different exercises that are also known as "staff rides": retrospective battlefield tours of the type practiced by the United States Army in the twentieth century and the on-site contingency planning exercises (Stabsreisen, literally "staff journeys") introduced by Gerhard von Scharnhorst in 1801 and made famous by the elder Helmuth von Moltke in the middle years of the nineteenth century. To avoid confusion between "decision-forcing staff rides" and staff rides of other sorts, the Case Method Project at the Marine Corps University in Quantico, Virginia, adopted the term "Russell Ride" to describe the decision-forcing staff rides that it conducts. The term is an homage to Major General John Henry Russell Jr., USMC, the 16th Commandant of the United States Marine Corps and an avid supporter of the applicatory method of instruction. Sandwich metaphors Decision-forcing cases are sometimes described with a system of metaphors that compares them to various types of sandwiches. In this system, pieces of bread serve as a metaphor for narrative elements (i.e. the start, continuation, or end of an account) and the filling of the sandwich serves as a metaphor for a problem that students are asked to solve. A decision-forcing case in which one protagonist is faced with two problems is thus a "triple-decker case." 
(The bottom piece of bread is the background to the first problem, the second piece of bread is both the historical solution to the first problem and the background to the second problem, and the third piece of bread is the historical solution to the second problem.) Similarly, a decision-forcing case for which the historical solution is not provided (and is thus a case with but one narrative element) is an "open-face" or "smørrebrød" case. A decision-forcing case in which students are asked to play the role of a decision-maker who is faced with a series of decisions in a relatively short period of time is sometimes called a "White Castle" case, a "slider" case, or a "day in the life" case. Case materials Case materials are any materials that are used to inform the decisions made by students in the course of a decision-forcing case. Commonly used case materials include articles that were composed for the explicit purpose of informing case discussion, secondary works initially produced for other purposes, historical documents, artifacts, video programs, and audio programs. Case materials are made available to students at a variety of times in the course of a decision-forcing case. Materials that provide background are distributed at, or before, the beginning of the class meeting. Materials that describe the solution arrived at by the protagonist and the results of that solution are passed out at, or after, the end of the class meeting. (These are called "the B-case", "the rest of the story", or "the reveal.") Materials that provide information that became available to the protagonist in the course of solving the problem are given to students in the course of a class meeting. (These are often referred to as "handouts.") Case materials may be either "refined" or "raw." Refined case materials are secondary works that were composed expressly for use as part of decision-forcing cases. (Most of the case materials that are available from case clearing houses and academic publishers are of the refined variety.) Raw case materials are those that were initially produced for reasons other than the informing of a case discussion. These include newspaper articles, video and audio news reports, historical documents, memoirs, interviews, and artifacts. Published case materials A number of organizations, including case clearing houses, academic publishers, and professional schools, publish case materials. These organizations include: Blavatnik School of Government Harvard Business School Stanford Graduate School of Business Columbia Business School IESE Business School INSEAD ICFAI Business School Hyderabad Ivey Business School Indian School of Business Indian Institute of Management, Ahmedabad Darden School of Business at the University of Virginia Nagoya University of Commerce & Business Asian Institute of Management Asian Case Research Centre at the University of Hong Kong Globalens at the University of Michigan Centre for Management Practice at Singapore Management University The Case Centre The Case Centre (formerly the European Case Clearing House), headquartered at Cranfield University, Cranfield, Bedford, United Kingdom, and with its US office at Babson College, Wellesley, Massachusetts, is the independent home of the case method. It is a membership-based, not-for-profit organisation and registered charity founded in 1973, with more than 500 members worldwide. 
The Case Centre is the world's largest and most diverse repository of case studies used in management education, with cases from the world's top case publishing schools, including Harvard Business School, ICFAI Business School Hyderabad, the Blavatnik School of Government, INSEAD, IMD, Ivey Business School, Darden School of Business, London Business School, and Singapore Management University, among others. Its stated aim is to promote the case method by sharing knowledge, skills, and expertise in this area among teachers and students, and for this it engages in various activities like conducting case method workshops, offering case scholarships, publishing a journal, and organizing global case method awards. The Case Centre Awards (known as the European Awards from 1991 to 2010) recognise outstanding case writers and teachers worldwide. These prestigious awards, popularly known as the case method community's annual 'Oscars' or the "business education Oscars", celebrate worldwide excellence in case writing and teaching. The narrative fallacy The presentation of a decision-forcing case necessarily takes the form of a story in which the protagonist is faced with a difficult problem. This can lead to "the narrative fallacy", a mistake that leads both case teachers and the developers of case materials to ignore information that, while important to the decision that students will be asked to make, complicates the telling of the story. This, in turn, can create a situation in which, rather than engaging the problem at the heart of the case, students "parse the case materials." That is, they make decisions on the basis of the literary structure of the case materials rather than the underlying reality. Techniques for avoiding the narrative fallacy include the avoidance of standard formats for case materials; awareness of tropes and clichés; the use of case materials originally created for purposes other than case teaching; and the deliberate inclusion of "distractors" – information that is misleading, irrelevant, or at odds with other information presented in the case. Purpose of the case method The case method gives students the ability to quickly make sense of a complex problem, rapidly arrive at a reasonable solution, and communicate that solution to others in a succinct and effective manner. In the course of doing this, the case method also accomplishes a number of other things, each of which is valuable in its own right. By exciting the interest of students, the case method fosters interest in professional matters. By placing such things in a lively context, the case method facilitates the learning of facts, nomenclature, conventions, techniques, and procedures. By providing both a forum for discussion and concrete topics to discuss, the case method encourages professional dialogue. By providing challenging practice in the art of decision-making, the case method refines professional judgement. By asking difficult questions, the case method empowers students to reflect upon the peculiar demands of their profession. In his classic essay on the case method ("Because Wisdom Can't Be Told"), Charles I. Gragg of the Harvard Business School argued that "the case system, properly used, initiates students into the ways of independent thought and responsible judgement." Incompatible objectives While the case method can be used to accomplish a wide variety of goals, certain objectives are at odds with its nature as an exercise in professional judgement. 
These incompatible objectives include attempts to use decision-forcing cases to: provide an example to be emulated; paint a particular person as a hero or a villain; encourage (or discourage) a particular type of behavior; or illustrate a pre-existing theory. Thomas W. Shreeve, who uses the case method to teach people in the field of military intelligence, argues that "Cases are not meant to illustrate either the effective or the ineffective handling of administrative, operational, logistic, ethical, or other problems, and the characters in cases should not be portrayed either as paragons of virtue or as archvillains. The instructor/casewriter must be careful not to tell the students what to think—they are not empty vessels waiting to be filled with wisdom. With this method of teaching, a major share of the responsibility for thinking critically about the issues under discussion is shifted to the students, where it belongs." Disclaimers Case materials are often emblazoned with a disclaimer that warns both teachers and students to avoid the didactic, hortatory, and "best practices" fallacies. Here are some examples of such disclaimers: This case is intended to serve as the basis for class discussion rather than to illustrate either the effective or ineffective handling of a situation. This decision-forcing case is an exercise designed to foster empathy, creativity, a bias for action, and other martial virtues. As such, it makes no argument for the effectiveness of any particular course of action, technique, procedure, or convention. This case is intended to serve as the basis for class discussion rather than to illustrate either the effective or ineffective handling of a situation. Its purpose is to put the student in the shoes of the decision-maker in order to gain a fuller understanding of the situations and the decisions made. Use of the case method in professional schools The case method is used in a variety of professional schools. These include: Harvard Business School IESE Business School Columbia Business School Singapore Management University Blavatnik School of Government INCAE Business School ICFAI Business School Hyderabad The Acton School of Business Hogeschool van Amsterdam Asian Institute of Management Indian Institute of Management, Ahmedabad Richard Ivey School of Business John F. Kennedy School of Government at Harvard University NUCB Business School at the Nagoya University of Commerce & Business Darden School of Business at the University of Virginia Columbia School of Journalism Mailman School of Public Health, Columbia University School of International and Public Affairs, Columbia University Yale School of Management Marine Corps University Cranfield School of Management School of Advertising & Public Relations, University of Texas Suleman Dawood School of Business at the Lahore University of Management Sciences Institute of Business Administration, Karachi Michael G. Foster School of Business Institute for Financial Management and Research Institute of Chartered Accountants in England and Wales University of Fujairah- MBA Program INALDE Business School in Bogota, Colombia See also Business schools Case competition Case study Casebook method (used by law schools) Decision game European Case Clearing House Experiential learning Harvard Business Publishing Teaching method References Literature Learning methods Management education Military education and training
0.772029
0.978182
0.755185
Anaphora (rhetoric)
In rhetoric, an anaphora ("carrying back") is a rhetorical device that consists of repeating a sequence of words at the beginnings of neighboring clauses, thereby lending them emphasis. In contrast, an epistrophe (or epiphora) is repeating words at the clauses' ends. The combination of anaphora and epistrophe results in symploce. Functions Other than the function of emphasizing ideas, the use of anaphora as a rhetorical device adds rhythm to a word as well as making it more pleasurable to read and easier to remember. Anaphora is repetition at the beginning of a sentence to create emphasis. Anaphora serves the purpose of delivering an artistic effect to a passage. It is also used to appeal to the emotions of the audience in order to persuade, inspire, motivate and encourage them. In Dr. Martin Luther King Jr.'s famous "I Have a Dream" speech, he uses anaphora by repeating "I have a dream" eight times throughout the speech. Usage Today, anaphora is seen in many different contexts, including songs, movies, television, political speeches, poetry, and prose. Examples See also Epistrophe Epanalepsis Figures of speech involving repetition Ubi sunt Notes References External links What is Anaphora?: Oregon State Guide to English Literary Terms Audio illustrations of anaphora Anaphora at Dictionary.com Video example of the anaphora Figures of speech Poetic devices Literary terminology
0.760553
0.9929
0.755153
Quadruple and quintuple innovation helix framework
The quadruple and quintuple innovation helix framework describes university-industry-government-public-environment interactions within a knowledge economy. In innovation helix framework theory, first developed by Henry Etzkowitz and Loet Leydesdorff and used in innovation economics and theories of knowledge, such as the knowledge society and the knowledge economy, each sector is represented by a circle (helix), with overlaps showing interactions. The quadruple and quintuple innovation helix framework was co-developed by Elias G. Carayannis and David F.J. Campbell, with the quadruple helix being described in 2009 and the quintuple helix in 2010. Various authors were exploring the concept of a quadruple helix extension to the triple helix model of innovation around the same time. The Carayannis and Campbell quadruple helix model incorporates the public via the concept of a 'media-based democracy', which emphasizes that when the political system (government) is developing innovation policy to develop the economy, it must adequately communicate its innovation policy to the public and civil society via the media to obtain public support for new strategies or policies. In the case of industry involved in R&D, the framework emphasizes that companies' public relations strategies have to negotiate ‘reality construction’ by the media. The quadruple and quintuple helix framework can be described in terms of the models of knowledge that it extends and by five subsystems (helices) that it incorporates; in a quintuple helix-driven model, knowledge and know-how are created and transformed, and circulate as inputs and outputs in a way that affects the natural environment. Socio-ecological interactions via the quadruple and quintuple helices can be utilized to define opportunities for the knowledge society and knowledge economy, such as innovation to address sustainable development, including climate change. Conceptual interrelationship of models of knowledge The framework involves the extension of previous models of knowledge, specifically mode 1, mode 2, the triple helix, and mode 3, by adding the public and the environment: Mode 1. Mode 1 was theorized by Michael Gibbons and is an older, linear model of fundamental university research where success is defined as "a quality or excellence that is approved by hierarchically established peers" and does not necessarily contribute to industry or the knowledge economy. Mode 2. Mode 2 was also theorized by Michael Gibbons and is context-driven, problem-focused and interdisciplinary research characterized by the following five principles: (1) knowledge produced in the context of application; (2) transdisciplinarity; (3) heterogeneity and organizational diversity; (4) social accountability and reflexivity; and (5) quality control. The Triple Helix model of innovation. The triple helix was first suggested by Henry Etzkowitz and Loet Leydesdorff in 1995 and emphasizes trilateral networks and hybrid organizations of university-industry-government relations to provide the infrastructure necessary for innovation and economic development; it provides a structural explanation for the historical evolution of mode 2 in relation to mode 1. Mode 3. Mode 3 was developed by Elias G. Carayannis and David F.J. Campbell in 2006. Mode 3 emphasizes the coexistence and co-development of diverse knowledge and innovation modes, together with mutual cross-learning between knowledge modes and interdisciplinary and transdisciplinary knowledge. Quadruple helix. 
The quadruple helix adds as fourth helix the public, specifically defined as the culture- and media-based public and civil society. This fourth helix includes, for example, sociological concepts like art, the creative industries, culture, lifestyles, media, and values. Quintuple helix. The quintuple helix adds as fifth helix the natural environment, more specifically socio-ecological interactions, meaning it can be applied in an interdisciplinary and transdisciplinary way to sustainable development. The five helices The main constituent element of the helical system is knowledge, which, through a circulation between societal subsystems, changes to innovation and know-how in a society (knowledge society) and for the economy (knowledge economy). The quintuple helix visualizes the collective interaction and exchange of this knowledge in a state by means of five subsystems (helices): (1) education system, (2) economic system, (3) natural environment, (4) media-based and culture-based public (also ‘civil society’), and (5) the political system. Each of the five helices has an asset at its disposal, with a societal and scientific relevance, i.e., human capital, economic capital, natural capital, social capital and information capital, and political capital and legal capital, respectively. Quadruple and quintuple helix and policy making The quadruple helix has been applied to European Union-sponsored projects and policies, including the EU-MACS (EUropean MArket for Climate Services) project, a follow-up project of the European Research and Innovation Roadmap for Climate Services, and the European Commission's Open Innovation 2.0 (OI2) policy for a digital single market that supports open innovation. Quadruple and quintuple helix in academic research The quadruple helix has implications for smart co-evolution of regional innovation and institutional arrangements, i.e., regional innovation systems. The quintuple helix has been applied to the quality of democracy, including in innovation systems; international cooperation; forest-based bioeconomies; the Russian Arctic zone energy shelf; regional ecosystems; smart specialization and living labs; and climate change and sustainable development, as well as to innovation diplomacy, a quintuple-helix based extension of science diplomacy. Criticism of the concept How to define the new sectors of the public and the environment with regard to the standard triple helix model of innovation has been debated, and some researchers see them as additional sectors while others see them as different types of overarching sectors which contain the previous sectors. See also Innovation economics Innovation system Knowledge economy Knowledge production modes Knowledge society Triple helix model of innovation References Innovation economics
0.767376
0.984044
0.755131
Context (linguistics)
In semiotics, linguistics, sociology and anthropology, context refers to those objects or entities which surround a focal event, in these disciplines typically a communicative event, of some kind. Context is "a frame that surrounds the event and provides resources for its appropriate interpretation". It is thus a relative concept, only definable with respect to some focal event within a frame, not independently of that frame. In linguistics In the 19th century, it was debated whether the most fundamental principle in language was contextuality or compositionality, and compositionality was usually preferred. Verbal context refers to the text or speech surrounding an expression (word, sentence, or speech act). Verbal context influences the way an expression is understood; hence the norm of not citing people out of context. Since much contemporary linguistics takes texts, discourses, or conversations as the object of analysis, the modern study of verbal context takes place in terms of the analysis of discourse structures and their mutual relationships, for instance the coherence relation between sentences. Neurolinguistic analysis of context has shown that the interaction between interlocutors defined as parsers creates a reaction in the brain that reflects predictive and interpretative reactions. It can be said then that mutual knowledge, co-text, genre, speakers, hearers create a neurolinguistic composition of context. Traditionally, in sociolinguistics, social contexts were defined in terms of objective social variables, such as those of class, gender, age or race. More recently, social contexts tend to be defined in terms of the social identity being construed and displayed in text and talk by language users. The influence of context parameters on language use or discourse is usually studied in terms of language variation, style or register (see Stylistics). The basic assumption here is that language users adapt the properties of their language use (such as intonation, lexical choice, syntax, and other aspects of formulation) to the current communicative situation. In this sense, language use or discourse may be called more or less 'appropriate' in a given context. In linguistic anthropology In the theory of sign phenomena, adapted from that of Charles Sanders Peirce, which forms the basis for much contemporary work in linguistic anthropology, the concept of context is integral to the definition of the index, one of the three classes of signs comprising Peirce's second trichotomy. An index is a sign which signifies by virtue of "pointing to" some component in its context, or in other words an indexical sign is related to its object by virtue of their co-occurrence within some kind of contextual frame. In natural language processing In word-sense disambiguation, the meanings of words are inferred from the context where they occur. Contextual variables Communicative systems presuppose contexts that are structured in terms of particular physical and communicative dimensions, for instance time, location, and communicative role. See also Aberrant decoding Context principle Context-sensitive language Conversational scoreboard Deixis Opaque context References Further reading For a review of the history of the principle of contextuality in linguistics, see Scholtz, Oliver Robert (1999) Verstehen und Rationalität: Untersuchungen zu den Grundlagen von Hermeneutik und Sprachphilosophie Sociolinguistics Discourse analysis Pragmatics
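The idea that word senses are inferred from surrounding verbal context can be made concrete with a simplified, Lesk-style overlap heuristic. The sketch below is purely illustrative and is not drawn from the article: the tiny sense inventory, the disambiguate function, and the example sentence are all hypothetical, and real systems rely on much richer resources such as WordNet glosses or statistical language models.

```python
# A minimal, illustrative Lesk-style word-sense disambiguation sketch.
# The sense inventory below is a toy example invented for this illustration;
# real systems draw glosses from lexical resources such as WordNet.

SENSES = {
    "bank": {
        "financial_institution": "an institution that accepts deposits and lends money",
        "river_bank": "the sloping land alongside a river or stream",
    }
}

def disambiguate(word, context_words):
    """Pick the sense whose gloss shares the most words with the context."""
    best_sense, best_overlap = None, -1
    for sense, gloss in SENSES.get(word, {}).items():
        overlap = len(set(gloss.lower().split()) & {w.lower() for w in context_words})
        if overlap > best_overlap:
            best_sense, best_overlap = sense, overlap
    return best_sense

# Example: the verbal context ("river", "fishing") points to the river sense.
print(disambiguate("bank", ["we", "sat", "by", "the", "river", "fishing"]))
# -> "river_bank"
```

The example is only meant to show the general principle: the more the surrounding context shares vocabulary (or, in modern systems, distributional features) with one sense description rather than another, the more strongly that sense is selected.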
0.767538
0.983834
0.75513
Tiqqun
Tiqqun was a French-Italian post-Marxist anarchist philosophical journal or zine, produced in two issues from 1999 to 2001. Topics treated in the journal's articles include anti-capitalism, anti-statism, Situationism, feminism, and the history of late 20th century revolutionary movements, especially May 1968 in France, the Italian Years of Lead, and the anti-globalization protests of the late 1990s and early 2000s. The journal's articles were written anonymously; as a result, the word "Tiqqun" is also used to name the articles' collective of authors, and other texts attributed to them. The journal came to wider attention following the Tarnac Nine arrests of 2008, a police operation which detained nine people on suspicion of having conspired in the recent sabotage of French electrical train lines. Those arrested were accused of having written The Coming Insurrection, a political tract credited to The Invisible Committee, a distinct anonymous group named in the journal. Julien Coupat, one of the arrested, was a contributor to the first issue of Tiqqun. The journal's articles are polemics against modern capitalist society, which the authors hold in contempt. Individual articles present diagnoses of specific aspects of modern society, drawing on ideas from continental philosophy, anthropology, and history. Guy Debord's concept of the Spectacle is used to explain how communication media and socialization processes support existing capitalist society, and Michel Foucault's concept of biopower is used to explain how states and businesses manage populations via their physical needs. The journal's articles introduce terminology for their topics, freely used throughout the other articles. A "Bloom" refers to an archetypal, alienated modern person or subject, named after the character Leopold Bloom from the James Joyce novel Ulysses. A "Young-Girl" refers to a person who participates in modern society and thereby reinforces it, exhibiting traits commonly associated with femininity. Although a "Bloom" frequently stands for a man and a "Young-Girl" frequently stands for a woman, the authors stress that the concepts are not gendered. The word Tiqqun is an alternate spelling of Tikkun olam, a Jewish theological concept which refers to repair or healing of the world. In the authors' context, Tiqqun refers to improvement of the human condition through the subversion of modern capitalist society. Due to their philosophical influences, political content and historical context, the Tiqqun articles have received some attention in humanities scholarship and anarchist reading circles. Selected articles have been republished in several languages. Contents and authorship The first issue of Tiqqun was published in February 1999 with the title (Tiqqun, Conscious Organ of the Imaginary Party: Exercises in Critical Metaphysics). The second issue was published in October 2001 with the title (Tiqqun, Organ of Liaison within the Imaginary Party: Zone of Offensive Opacity). For simplicity, the two issues are commonly referred to as Tiqqun 1 and Tiqqun 2, respectively. Eleven articles were published in Tiqqun 1, and ten major articles were published in Tiqqun 2. Additionally, the first issue contained a one-page spread, and the second issue contained nine smaller pieces interspersed between its ten main articles: two-page spreads with black borders. In all, 31 pieces were published in the journal, listed below in the order they originally appeared. 
Due to their anonymity, Tiqqun's articles are not credited to individual authors; rather, they are simply attributed to the journal's namesake. However the first issue's back cover contained a colophon which listed the issue's editorial board as Julien Boudart, Fulvia Carnevale, Julien Coupat, Junius Frey, Joël Gayraud, Stephan Hottner and Rémy Ricordeau. The actor and philosopher Mehdi Belhaj Kacem briefly collaborated with the Tiqqun collective toward the end of its existence. In an interview, he noted that the group disbanded shortly after the September 11 attacks. Themes Tiqqun's articles pathologize modern capitalist society, introducing several terms used to describe social phenomena. The authors use the terms together to present an anti-capitalist, anti-statist worldview. Because of their contempt for modern society, the authors advocate insurrectionary anarchism, crime, and other methods intended to subvert it. The authors also indicate that people opposed to modern capitalist society may form meaningful community with each other based on their shared rejection of it. According to the authors, the coordination of states and private businesses gives rise to modern capitalist society (Empire), which entails "commodity domination" of social interactions, supplanting authentic human community. This leads to several pathological sociological types: socially alienated people (Blooms), people who fully participate in society and thereby become commodities themselves (Young-Girls), people who criticize society without attempting to change it (Men of the Old Regime), and subcultures which seek to preserve themselves at the expense of their members' inability to be honest with each other (Terrible Communities). Historically, modern Western society transitioned from a period of liberal governance (the liberal hypothesis) to a period stressing social control using technology (the cybernetic hypothesis). Modern society uses two techniques to maintain its power and to reproduce itself: biopower is used to manage the physical needs of the population, while the Spectacle is an established form of discourse which reproduces modern society through its socialization in individuals. Against this, the authors posit "critical metaphysics", an attitude which rejects modern society. Persons who reject modern society may meet in "planes of consistency", circumstances which allow like-minded people to encounter each other. Persons rejecting modern society form the Imaginary Party, an unorganized group who may coalesce around specific events of civil unrest. An example is the Black bloc, a practice—employed during anti-globalization protests and riots—of dressing in black and wearing face coverings. The authors describe "zones of offensive opacity" as places where people may meet to subvert modern society. The process through which such people meet and interact is described as Tiqqun. The tone of the articles is frequently acerbic and sarcastic. The philosophers Thucydides, Thomas Hobbes and Martin Heidegger are described respectively as "that moron", "that piece of shit" and "swine", due to the authors' disagreements with their views. The Italian sociologist Antonio Negri is also frequently the subject of harsh criticism, due to his involvement in activism which the authors feel is too conciliatory to existing capitalist society. The articles are illustrated with reproductions of artwork and photography of riots and demonstrations. 
Synopsis Tiqqun 1 The journal's first issue included a frontispiece depicting a traditional Italian mask set against the Latin inscription SUA CUIQUE PERSONA (To each their own mask); masks are used frequently as metaphorical devices throughout the issue. The frontispiece is a detail of Portrait Cover with Grotesques, an Italian Renaissance painting of uncertain origin, commonly attributed to Ridolfo Ghirlandaio. The painting functioned as a practical art object, intended as a cover for a portrait painting. Although its companion is also uncertain, Portrait Cover has become associated with the portrait Veiled Woman, also attributed to Ghirlandaio. The two artworks are exhibited together at the Uffizi Gallery in Florence. Of course you know, this means war! is a brief opening piece which sets out the authors' disgust with modern society, which they liken to the Situationist notion of the Spectacle, and also to the Kabbalistic notion of qlippoth (shells, husks), the latter being evil forces in Jewish mysticism. Against the prevailing social order, the authors propose Tiqqun, referring both to the Jewish concept of healing, and also to the journal itself. The piece is dated Venice, January 15, 1999. What is Critical Metaphysics? gives a description of its titular subject, which is opposed to "commodity domination", or commodity metaphysics. The article's title is a play on What is Metaphysics?, a lecture given by Heidegger in 1929. According to the authors, critical metaphysics is an irrepressible, anti-capitalist way of perceiving reality, which consumer culture, modernity and analytic philosophy have failed to eliminate. The authors stress that the concept is not academic, but practical: "Critical Metaphysics is in everyone's guts." Persons who engage in critical metaphysics are described as critical metaphysicians. In one passage, people who join to "politicize metaphysics" represent the emergence of "the coming insurrection of the Mind"; The Coming Insurrection was the title later given to the first work by The Invisible Committee. Between articles an image of a black square was reproduced, taken from a work by the occult philosopher Robert Fludd. For Fludd, the black square represented the Void which preceded the Creation. The authors reproduced the image to illustrate surrounding themes of nothingness and night. Beginning with a quotation from the James Joyce novel Ulysses, Theory of Bloom describes a phenomenon in which people become alienated from each other as a consequence of living in capitalist society. Although the term "Bloom" is used contextually throughout the issue to refer to an alienated modern subject, the authors explicitly deny this characterization as reductive, instead describing Bloom as a Stimmung, or a certain "mood" of personality. According to the authors, a Bloom is "foreign to himself" in the sense that capitalist society denies him the ability to be his authentic self. Bloom is thus a kind of "mask", recalling the issue's frontispiece. Modern society therefore encourages superficial identification with various predicate labels (being a woman, being gay, being British, etc.), which the authors refer to as "poor substantiality", in an effort to prevent the socially harmful consequences of an isolated population. Since a Bloom is alienated from the modern society held in contempt by the authors, they consider that his rejection of that society can lead to violence, expressed in the murders committed by Mitchell Johnson and Kipland Kinkel, among others.
Phenomenology of Everyday Life is a brief piece in which the narrator describes an "absurd" interaction with a bakery clerk, where each is expected to play the economic roles of customer and vendor. Theses on the Imaginary Party describes its title's subject as a portion of humanity who come to reject modern society. Spectacle and biopower are presented as two reinforcing aspects of modernity: the former is a control mechanism which ensures compliance with and reproduction of the society's norms, while the latter presents itself as a benevolent force providing for the needs of the population. Against these, "agents" of the Imaginary Party commit acts which are pathologized by the society as antisocial and irrational, including rioting and mass shootings. According to the authors, Blooms are prone to become members of the Imaginary Party because of their alienation from modern society. Johann Georg Elser, failed assassin of Adolf Hitler, is described by the authors as a "model Bloom", due to his modest life. Silence and Beyond is a piece which describes the threatening power of silence when wielded by a group of rioters. The article describes the 1998 suicide of Edoardo Massari, an Italian anarchist who was jailed in Turin on suspicion of eco-terrorism against construction sites for the Italian TGV high-speed train. In response, anarchist rioters silently marched through Turin over the next several days, brandishing weapons, damaging property, and assaulting journalists. The authors praised the rioters' tactics because by refusing to make demands or to communicate in conventional ways, the rioters frustrated commentators who insisted on dialogue, instead expressing their opposition to existing society using violent direct action. On the Economy Considered as Black Magic is a criticism of modern economics. The authors reject the economic property of fungibility as dehumanizing, since it leads to the fungibility of human beings themselves; Blooms are "absolutely equivalent" with each other (as potential employees, members of society, etc.) and therefore adopt superficial traits in an effort to present individual personalities. The authors also reject what they describe as the ahistorical retconning of modern economic theory onto all human history. As a counterexample, they cite the gift economy of the Kula ring in Papua New Guinea, as described in Bronisław Malinowski's Argonauts of the Western Pacific. Preliminary Materials For a Theory of the Young-Girl describes "the Young-Girl" as a social archetype related to young women's femininity in modern capitalist society. The article consists of a series of glosses, including declarative statements on the characteristics of the Young-Girl and phrases taken from women's magazines. The archetype is complementary to Bloom: whereas Bloom is an alienated subject who threatens to harm capitalist society, the Young-Girl fully participates in, is a commodity of, and defends that society. Although the article focuses on traits and language associated with femininity, it also stresses that men can function as Young-Girls in society by participating in and upholding it, while also taking care to uphold their public image out of vanity. A quotation attributed to Silvio Berlusconi describes him as a male Young-Girl: "They've wounded me in what is most dear to me: my image." Building on these themes, Machine-Men: User's Guide is a feminist piece which discusses prescription drugs—especially Viagra—as a form of biopolitical technology. 
The Critical Metaphysicians beneath the "Unemployed Persons' movement" is a brief article describing an "unemployed workers'" movement in France during 1997 and 1998, including reproductions of related protest flyers. A Few Scandalous Actions of the Imaginary Party is a series of vignettes recounting situations instigated by the authors and their associates; the piece ends with a satirical mockery of the novelist Michel Houellebecq, an object of scorn for the authors. Tiqqun 2 Introduction to Civil War expands the concept of civil war to become a philosophical category explaining human interactions. According to the authors, individuals have various inclinations, which are forms of life. Because humans have differing inclinations and share the same world, they exist in a state of civil war with each other—their conflicts are not those of states in conventional warfare, and although violence is not necessary, its possibility is never excluded. States and modern society developed as mechanisms which sought to neutralize the natural state of civil war; against this, the authors propose natural civil war as the preferable state for humanity, which they liken to Tiqqun. The Cybernetic Hypothesis describes the rise of cybernetics in the years following World War II, a modern paradigm of control mechanisms which supplanted the "liberal hypothesis". The liberal hypothesis refers to the dominance of liberalism—and the ideal of rational self-interest—from the early 19th through the early 20th centuries, until societal control was sought for its own sake. Cybernetics was developed with military applications: Norbert Wiener developed an automated, predictive anti-aircraft system, and the need to develop a decentralized communication system in the event of nuclear war gave rise to the internet. Since cybernetic systems seek control and equilibrium, the authors advocate their defeat by creating unmanageable situations. The article was illustrated by various images depicting technology, including works by H.R. Giger suggesting its disturbing aspects. Theses on the Terrible Community describes pathological communities which arise in modern society. Although such communities may include countercultures, they can also include mainstream communities, such as modern corporations. Terrible communities seek to preserve themselves at the cost of their members' ability to speak honestly with each other (such frank speech is referred to in the article as parrhesia). The authors seek to replace terrible communities with authentic communities whose members can be honest with each other. The Problem of the Head is a criticism of avant-garde groups in revolutionary politics and the arts, which present themselves as the "heads" of their corresponding movements. "The Problem of the Head" also refers to the questions of societal leadership (a king, a president, a business, etc.), and the form of leadership in a society (monarchy, democracy, oligarchy, etc.). The authors claim that the liberal hypothesis was a previous answer to the problem, eventually replaced by the cybernetic hypothesis. The period from 1914 to 1945 was a time of instability, and this is what allowed avant-garde movements—such as Surrealism and Bolshevism—to flourish. However, avant-garde movements tend to become preoccupied with their own culture and internal issues, to the detriment of the broader issues that they claim to represent, and are therefore likened to terrible communities.
A note indicates that in June 2000, the piece was read aloud at a retrospective exhibit of modern art in Venice, upsetting two of the participating artists. "A critical metaphysics could emerge as a science of apparatuses..." is presented as the founding text of the SACS, the Society for the Advancement of Criminal Science. The authors describe modern society as a series of control mechanisms, or apparatuses; examples include highways, store security, and turnstiles. For the authors, the "science of apparatuses" is thus simply the science of crime, techniques for circumventing and defeating control apparatuses. The authors therefore promote the collection and dissemination of criminal techniques intended to undermine capitalist society. Report to the S.A.C.S. Concerning an Imperial Apparatus is a critical account of Bluewater, a shopping mall outside London, newly completed at the time of writing. The authors compare the modern shopping mall to 19th-century historical precursors, including the French arcades and The Crystal Palace. They also describe shopping malls using terms taken from the Project on the City, a book series on urban planning just mentioned by name—and derided—in the previous article. The article describes the 1956 opening of the Southdale Center—the world's first enclosed, air-conditioned shopping mall—air conditioning itself, and artificial plants, referred to as "Replascape". Articles on all three topics appear in the volumes Mutations and Harvard Design School Guide to Shopping. The Little Game of the Man of the Old Regime is a critique of a conservative social archetype, described as an older person who withdraws from society while criticizing it, presenting themselves as above or outside social conflict. The authors describe the Man of the Old Regime as a specific type of Bloom whose critique of society is impotent; they also attribute several negative traits to the archetype, including false consciousness. In the original issue the article was followed by You're Never Too Old to Ditch Out, a small piece which encouraged older people to withdraw from mainstream society and instead seek authentic community with others, as opposed to isolation. Sonogram of a Potential is a feminist article treating sonograms, abortion, and women's history during the Italian Years of Lead in the 1970s. The authors use the Herman Melville short story "Bartleby, the Scrivener" and its protagonist's phrase "I would prefer not to" as devices to explore the concepts of general strike and sex strikes. This Is Not a Program describes the Years of Lead in detail, contrasting the Italian "Creeping May" of 1977 with the French protests of May 1968. During the period of civil unrest, several left-wing factions competed with each other and with the Italian state. The established Left consisted of the Italian Communist Party (PCI), labor unions and workers' movements, while a more radical faction—the Autonomists—rejected organizational hierarchy and work itself. In the context of anarchist movements, the authors describe the Imaginary Party as a "plane of consistency" where individuals who seek to subvert modern society can find each other and form alliances. How Is It to Be Done? is a brief lyrical piece summarizing the issue's themes, its title a play on Lenin's work What Is to Be Done?. Referring to the issue's subtitle, the authors seek to inhabit "zones of offensive opacity"—akin to no-go zones—as points from which to begin an assault on modern capitalist society.
In order to maintain opacity, the authors encourage the rejection of predicate labels associated with identity politics because authorities might use them to more easily identify individuals. The piece ends with a call to insurrection. Minor pieces Nine minor pieces appeared in Tiqqun 2, which included reproductions of flyers posted publicly, or for dissemination at demonstrations. Final Warning to the Imaginary Party is a sarcastic list of articles concerning the proper use of public space—for leisure and consumption as opposed to protest or "abnormal behavior"—written from the point of view of governments and businesses. The piece was reproduced as photographs of the printed list, posted in public and subsequently defaced and marked with criticisms. The Conquerors had Conquered Without Trouble is a prose vignette describing gatherings of silent, masked people in the world's cities, to the disturbance of the cities' original "conquerors". The old conquerors blamed the phenomenon on an "Invisible Committee", and the piece thus invoked the phrase later used as an author's credit in the eponymous texts. The untitled notes on citizenship papers are remarks on a social movement demanding citizen documentation for all persons; the authors observed that such a movement could be tactically useful to abolish the concept of citizenship as such, in the sense that granting citizenship documentation to all persons would defeat the exclusive character of citizenship itself. Progress doesn't want Those that don't want Progress is another sarcastic flyer, admonishing the residents of the Paris suburb Montreuil to accept gentrification and the re-election of mayor Jean-Pierre Brard, or else leave. Stop DomestiCAFion! is a flyer concerning the demeaning aspects of applying for welfare, describing inspections made by social workers with regard to income and social life as intrusive. This was immediately followed by a quotation from Robert Walser describing a flame igniting on a stage during a performance, which the audience initially believed to be part of the show, but which then frightened the performers and finally the audience once they understood the fire to be a real danger. Notes on the Local is a series of remarks on the fragmentation of the built environment into spaces with distinct functions. According to the authors, places like highways, supermarkets and public benches are transient locations that their users are expected to pass through in a timely fashion. To cope with this regimented use of real space, virtual spaces (television, internet, video games) are provided to people to give an illusion of freedom. You're Never Too Old to Ditch Out is a piece which exhorted the elderly to deny capitalist society their further participation in it, instead using their savings for self-reliance and to seek authentic community with others. Hello! is a piece which criticized the activist group ATTAC for what the authors described as its recuperation into conventional capitalist society. Ma noi ci saremo (But We'll Be Here) is a series of remarks on the then-recent anti-globalization protests which occurred in Genoa, Prague, and Seattle. Related texts Other texts not appearing in the original journal have been associated with Tiqqun and the Invisible Committee. The Great Game of Civil War is a brief piece describing the difficulty of leaving modern society, using sarcastic language similar to that found in Tiqqun's flyers and a ten-point format similar to Final Warning to the Imaginary Party.
In 2004, the postscript to an Italian edition of Theory of Bloom announced the forthcoming publication of Call (Appel), an anonymous tract which proposed secession from mainstream capitalist society. Call used vocabulary and rhetoric common to both Tiqqun and the Invisible Committee (e.g. Spectacle and biopower, an imperative to form communes). Call was later criticized on the Left for its suggestion that actors can unilaterally withdraw from capitalist society on their own terms. According to the critic, capitalism continues to inform relations of production throughout society, a situation from which potential defectors cannot immediately escape. Reception Tiqqun came to wider attention in the English-speaking world through its association with the Invisible Committee, whose book The Coming Insurrection was denounced (and thereby popularized) by the American conservative commentator Glenn Beck following the Tarnac Nine arrests. Due to its popularization following the arrests, the journal's articles have received attention in humanities scholarship and anarchist reading circles, generating a body of critical literature. Some authors have analyzed the articles' historical background, while others have used them to underline points in original research. Criticisms range from perceived misogyny (in the Young-Girl article) to the commercial success of The Coming Insurrection itself. Jason E. Smith detailed the history of civil unrest in 1970s Italy, providing historical background for Tiqqun's subject matter in This is Not a Program. He underlined the division within the Italian Left between established, labor-focused organizations (including the Italian Communist Party and workers' unions) and more radical, autonomist groups who refused the employment relation altogether, using autoreduction as a coercive tactic to appropriate goods and services at lower prices, including the looting of supermarkets. Smith argued that Tiqqun's articles advocate a politics of incivility, informed by the latter autonomist tendency in the Italian Left. Alexander R. Galloway cited The Cybernetic Hypothesis in an essay treating the conceptual history of the black box, likening black bloc demonstrators to "a black box" in the sense that each has internal dynamics opaque to outsiders. Andrew Culp discussed Michel Foucault's studies on war, politics and insurrection as precursors of Tiqqun's martial discourse; he also described the Invisible Committee as a group which splintered from the personnel involved with creating the journal. In two related articles, Jackie Wang cited The Cybernetic Hypothesis to describe policing as a form of social control. One piece detailed the real example of PredPol, predictive policing software adopted by several American police departments throughout the 2010s; the second was a personal reflection on the fictional example of RoboCop as a cybernetic cop. In response to the emergence of the COVID-19 pandemic and the practice of wearing face masks intended to slow the spread of the disease, Philippe Theophanidis essayed the cultural significance of masks, using Portrait Cover with Grotesques as a device to explore the topic. He noted the painting's invocation in Tiqqun, and also noted with irony that although face coverings had recently been banned in France due to their use by demonstrators and by Muslim women as part of the niqab, the French government later mandated face coverings in response to the pandemic.
Reg Johanson decried an ableist tendency which he observed in both Tiqqun and the Invisible Committee. According to Johanson, both collectives are suspicious of people suffering from serious illness or disabilities because their status renders them dependent on—and necessarily complicit with—the society sustaining their lives, which the authors seek to subvert. He also noted that the collective placed a premium on mobility, suggesting the members' possible youth, wealth, or lack of family life. Preliminary Materials For a Theory of the Young-Girl consists of a series of passages characterizing "the Young-Girl", frequently in sexist terms. Representative examples include "The Young-Girl is a lie, the apogee of which is her face." and "The Young-Girl's ass is a global village." Critics of the text agree that its ostensible purpose is not to insult women, but rather to denounce a capitalist process of socialization which produces "the Young-Girl" as a pathological archetype which is harmful to real women. While acknowledging this premise, Moira Weigel and Mal Ahern criticized the text as misogynistic, suggesting that its anonymity and irony were used as covers to pre-emptively deflect accusations of sexism; Weigel and Ahern's article was itself criticized in later articles. Catherine Driscoll noted that the device of the "Young-Girl" does not suggest the authors' dissatisfaction with society from a woman's point of view, but was instead chosen as one subordinate facet of a larger political philosophical project. Translator Ariana Reines noted that although she later came to appreciate the text, the process of reading and translating it made her sick—not in the metaphorical sense of finding the rhetoric disagreeable, but in the literal sense that she experienced nausea and migraines while preparing her translation. Tiqqun has also been criticized in anarchist reading circles, frequently in connection with the Invisible Committee. One article traced the journal's philosophical influences, focusing on Heidegger, nihilism and the Jewish messianic figures of Sabbatai Zevi and Jacob Frank, ultimately rejecting the journal itself as philosophically insignificant. Another article criticized This is Not a Program, claiming that the latter gave a revisionist account of the Years of Lead. Others instead focused on works by the Invisible Committee (though mentioning Tiqqun in passing), arguing that the former group marketed its books as fashionable consumer products following the Tarnac Nine arrests, contrary to their purported anti-capitalist views. Common to all these articles is the observation that the texts under examination—whether by Tiqqun or the Invisible Committee—have a tendency to contradict themselves; the criticisms also use polemical language comparable to that used in Tiqqun itself. A pair of critical works discussed Tiqqun and the Invisible Committee in more sympathetic terms. Pedro José Mariblanca Corrales treated Tiqqun's concept of Bloom by way of the journal's vocabulary (see the glossary below), elaborating the latter to explain the social causes giving rise to the former. Alden Wood wrote a series of academic articles collected in a single volume, exploring aspects of the two groups' writings by reading them together with others. Wood compared the groups' use of musical metaphor with the atonal compositions of Arnold Schoenberg, their invocations of nihilism with Georges Bataille, and detailed the influence of Heidegger on Tiqqun's project, an influence also noted by others.
Glossary Tiqqun's articles introduce several items of jargon which are freely used throughout the journal's other articles. Major terms are described here. Notes Bibliography Original French sources Original French edition of Tiqqun 1. Original French edition of Tiqqun 2. English translations Translation of the article which originally appeared in Tiqqun 2. Translation of the titular article, and also of "How Is It to Be Done?", which both originally appeared in Tiqqun 2. Translation of a revised version of the article which originally appeared in Tiqqun 1. Translation of the article which originally appeared in Tiqqun 1. Translation of the titular article, and also of "A critical metaphysics could emerge as a science of apparatuses...", which both originally appeared in Tiqqun 2. Unofficial English translation of Tiqqun 1. References Further reading Ceccaldi, Jérôme. "Rions un peu avec Tiqqun." Multitudes 8 (2002): pp. 239–242. External links The Anarchist Library Contains selected texts from Tiqqun, and also several essays critical of the journal and its association with The Coming Insurrection. bloom0101.org (Archived). Dedicated to the free diffusion of Tiqqun texts, including their translations into several languages. clairefontaine.ws Website of an art collective including Fulvia Carnevale, former Tiqqun contributor. tiqqun.jottit.com (Archived). Work of Tiqqunista, an anonymous translator, providing texts from Tiqqun in English. Autonomism Political philosophy journals
0.764737
0.98743
0.755124
Chicago school (sociology)
The Chicago school (sometimes known as the ecological school) refers to a school of thought in sociology and criminology originating at the University of Chicago whose work was influential in the early 20th century. Conceived in 1892, the Chicago school first rose to international prominence as the epicenter of advanced sociological thought between 1915 and 1935, when its work constituted the first major body of research to specialize in urban sociology. This was considered the Golden Age of Sociology, with influence on many of today's well-known sociologists. Their research into the urban environment of Chicago would also be influential in combining theory and ethnographic fieldwork. Major figures within the first Chicago school included Nels Anderson, Ernest Burgess, Ruth Shonle Cavan, Edward Franklin Frazier, Everett Hughes, Roderick D. McKenzie, George Herbert Mead, Robert E. Park, Walter C. Reckless, Edwin Sutherland, W. I. Thomas, Frederic Thrasher, Louis Wirth, and Florian Znaniecki. The activist, social scientist, and Nobel Peace Prize winner Jane Addams also forged and maintained close ties with some of the members of the school. Following the Second World War, a "second Chicago School" arose, whose members combined symbolic interactionism with methods of field research (today known as ethnography) to create a new body of work. Luminaries from the second Chicago school include Howard S. Becker, Richard Cloward, Erving Goffman, David Matza, Robert K. Merton, Lloyd Ohlin and Frances Fox Piven. Theory and method The Chicago school is best known for its urban sociology and for the development of the symbolic interactionist approach, notably through the work of Herbert Blumer. It has focused on human behavior as shaped by social structures and physical environmental factors, rather than genetic and personal characteristics. Biologists and anthropologists had accepted the theory of evolution as demonstrating that animals adapt to their environments. As applied to humans who are considered responsible for their own destinies, members of the school believed that the natural environment, which the community inhabits, is a major factor in shaping human behavior, and that the city functions as a microcosm: "In these great cities, where all the passions, all the energies of mankind are released, we are in a position to investigate the process of civilization, as it were, under a microscope." Members of the school have concentrated on the city of Chicago as the object of their study, seeking evidence of whether urbanization and increasing social mobility have been the causes of contemporary social problems. By 1910, the population exceeded two million, many of whom were recent immigrants to the U.S. With a shortage in housing and a lack of regulation in the burgeoning factories, the city's residents experienced homelessness and poor housing, living, and working conditions with low wages, long hours, and excessive pollution. In their analysis of the situation, Thomas and Znaniecki (1918) argued that these immigrants, released from the controls of Europe to the unrestrained competition of the new city, contributed to the city's dynamic growth.
Like the person who is born, grows, matures, and dies, the community continues to grow and exhibits properties of all of the individuals who had lived in the community. Ecological studies, as these sociologists practiced them, consisted of making spot maps of Chicago for the place of occurrence of specific behaviors, including alcoholism, homicide, suicides, psychoses, and poverty, and then computing rates based on census data (a brief illustrative computation in this spirit appears after the list of themes below). A visual comparison of the maps could identify the concentration of certain types of behavior in some areas. Correlations of rates by areas were not made until later. For W. I. Thomas, the groups themselves had to reinscribe and reconstruct themselves to prosper. Burgess studied the history of development and concluded that the city had not grown at the edges. Although the presence of Lake Michigan prevented the complete encirclement, he postulated that all major cities would be formed by radial expansion from the center in concentric rings which he described as zones, i.e. the business area in the center; the slum area (aka "the zone in transition") around the central area; the zone of workingmen's homes farther out; the residential area beyond this zone; and then the bungalow section and the commuter's zone on the periphery. Under the influence of Albion Small, the research at the school mined the mass of official data including census reports, housing/welfare records and crime figures, and related the data spatially to different geographical areas of the city. Criminologists Shaw and McKay created statistical maps: spot maps to demonstrate the location of a range of social problems with a primary focus on juvenile delinquency; rate maps which divided the city into blocks of one square mile and showed the population by age, gender, ethnicity, etc.; zone maps which demonstrated that the major problems were clustered in the city center. Thomas also developed techniques of self-reporting life histories to provide subjective balance to the analysis. Park, Burgess, and McKenzie (1925) are credited with institutionalizing, if not establishing, sociology as a science. They are also criticized for their overly empiricist and idealized approach to the study of society but, in the inter-war years, their attitudes and prejudices were normative. Three broad themes characterized this dynamic period of Chicago studies: Culture contact and conflict: Studies how ethnic groups interact and compete in a process of community succession and institutional transformation. An important part of this work concerned African Americans; the work of E. Franklin Frazier (1932; 1932), as well as of Drake and Cayton (1945), shaped white America's perception of black communities for decades. Succession in community institutions as stakeholders and actors in the ebb and flow of ethnic groups. Cressey (1932) studied the dance hall and commercialized entertainment services; Kincheloe (1938) studied church succession; Janowitz (1952) studied the community press; and Hughes (1979) studied the real-estate board. City politics: Charles Edward Merriam's commitment to practical reform politics was matched by Harold Gosnell (1927) who researched voting and other forms of participation. Gosnell (1935), Wilson (1960), and Grimshaw (1992) considered African American politics; and Banfield and Wilson (1963) placed Chicago city politics in a broader context.
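The rate computation just described lends itself to a brief illustration. The sketch below is not drawn from Shaw and McKay's actual materials: the square-mile block identifiers and census populations are hypothetical, and the snippet only shows the arithmetic of reducing a spot map to incidents per 1,000 residents per block.

```python
# Minimal illustrative sketch (hypothetical data, not the original studies):
# reduce a spot map to per-block counts, then divide by census populations
# to obtain a rate map of incidents per 1,000 residents.
from collections import Counter

def rates_per_thousand(incident_blocks, block_populations):
    """Return {block_id: incidents per 1,000 residents} for each block with census data."""
    counts = Counter(incident_blocks)  # the spot map reduced to counts per block
    return {
        block_id: 1000.0 * counts.get(block_id, 0) / population
        for block_id, population in block_populations.items()
        if population > 0
    }

# Hypothetical delinquency incidents tagged by square-mile block, and block populations.
incidents = ["loop", "loop", "transition", "transition", "transition", "commuter"]
populations = {"loop": 12000, "transition": 8000, "workingmen": 15000, "commuter": 20000}
print(rates_per_thousand(incidents, populations))
```

Comparing such rates, rather than raw counts, across blocks and zones is what allowed the visual identification of concentrations of problem behavior independently of how many people lived in each area.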
The school is perhaps best known for the subcultural theories of Thrasher (1927), Frazier (1932; 1932), and Sutherland (1924), and for applying the principles of ecology to develop the social disorganization theory which refers to consequences of the failure of: social institutions or social organizations (including the family, schools, churches, political institutions, policing, business, etc.) in identified communities and/or neighborhoods, or in society at large; and social relationships that traditionally encourage co-operation between people. Thomas defined social disorganization as "the inability of a neighborhood to solve its problems together" which suggested a level of social pathology and personal disorganization, so the term, "differential social organization" was preferred by many, and may have been the source of Sutherland's (1947) differential association theory. The researchers have provided a clear analysis that the city is a place where life is superficial, where people are anonymous, where relationships are transitory and friendship and family bonds are weak. They have observed the weakening of primary social relationships and relate this to a process of social disorganization (comparison with the concept of anomie and the strain theories is instructive). Ecology and social theories Vasishth and Sloane (2000) argue that while it is tempting to draw analogies between organisms in nature and the human condition, the problem lies in reductionism, i.e. that the science of biology is oversimplified into rules that are then applied mechanically to explain the growth and dynamics of human communities. The most fundamental difficulties are definitional: If a community is a group of individuals who inhabit the same place, is the community merely the sum of individuals and their activities, or is it something more than an aggregation of individuals? This is critical in planning research into group interactions. Will research be effective if it focuses on the individuals composing a group, or is the community itself a proper subject of research independently of the individuals who compose it? If the former, then data on individuals will explain the community, but if the community either directly or indirectly affects the behavior of its members, then research must consider the patterns and processes of community as distinct from patterns and processes in populations of individuals. But this requires a definition and distinction between "pattern" and "process". The structures, forms, and patterns are relatively easy to observe and measure, but they are nothing more than evidence of underlying processes and functions which are the real constitutive forces in nature and society. The Chicago school wanted to develop tools by which to research and then change society by directing urban planning and social intervention agencies. It recognized that urban expansion was not haphazard but quite strongly controlled by community-level forces such as land values, zoning ordinances, landscape features, circulation corridors, and historical contingency. This was characterized as ecological because the external factors were neither chance nor intended, but rather arose from the natural forces in the environment which limit the adaptive spatial and temporal relationships between individuals. 
The school sought to derive patterns from a study of processes, rather than to ascribe processes to observed patterns, and the patterns they saw emerge are strongly reminiscent of Clements' ideas of community development. Conclusions The Chicago Area Project was a practical attempt by sociologists to apply their theories in a city laboratory. Subsequent research showed that the youth athletic leagues, recreation programs, and summer camps worked best along with urban planning and alternatives to incarceration as crime control policy. Such programs are non-entrepreneurial and non-self-sustaining, and they fail when local or central government does not make a sustained financial commitment to them. Although, with hindsight, the school's attempts to map crime may have produced some distortions, the work was valuable in that it moved away from a study of pattern and place toward a study of function and scale. To that extent, this was work of high quality that represented the best science available to the researchers at the time. The Social Disorganization Theory itself was a landmark concept and, as it focuses on the absence or breakdown of social control mechanisms, there are obvious links with social control theory. Travis Hirschi (1969) argues that variations in delinquent behavior among youth could be explained by variations in the dimensions of the social bond, namely attachment to others, commitments to conventional goals, acceptance of conventional moral standards or beliefs, and involvement in conventional activities. The greater the social bonds between a youth and society, the lower the odds of involvement in delinquency. When social bonds to conventional role models, values and institutions are aggregated for youth in a particular setting, they measure much the same phenomena as captured by concepts such as network ties or social integration. But the fact that these theories focus on the absence of control or the barriers to progress means that they ignore the societal pressures and cultural values that drive the system Merton identified in the Strain Theory, or the motivational forces Cohen proposed were generating crime and delinquency. More modern theorists like Empey (1967) argue that the system of values, norms and beliefs can be disorganized in the sense that there are conflicts among values, norms and beliefs within a widely shared, dominant culture. While condemning crime in general, law-abiding citizens may nevertheless respect and admire the criminal who takes risks and successfully engages in exciting, dangerous activities. The depiction of a society as a collection of socially differentiated groups with distinct subcultural perspectives that lead some of these groups into conflict with the law is another form of cultural disorganization, typically called cultural conflict. Modern versions of the theory sometimes use different terminology to refer to the same ecological causal processes. For example, Crutchfield, Geerken and Gove (1982) hypothesize that the social integration of communities is inhibited by population turnover and report supporting evidence in the explanation of variation in crime rates among cities. The greater the mobility of the population in a city, the higher the crime rates. These arguments are identical to those proposed by social disorganization theorists, and the evidence in support of them is as indirect as the evidence cited by social disorganization theorists.
But, by referring to social integration rather than disintegration, this research has not generated the same degree of criticism as social disorganization theory. See also Ruth Shonle Cavan; Aristotelian philosopher, psychologist, and encyclopedist Mortimer Adler; university president and reformer Robert Maynard Hutchins; French sociologist and preceptor Gabriel Tarde. References Further reading Bulmer, Martin. 1984. The Chicago School of Sociology: Institutionalization, Diversity, and the Rise of Sociological Research. Chicago: University of Chicago Press. [provides a comprehensive history of the Chicago school]. Burgess, Ernest, and Donald J. Bogue, eds. 1964. Contributions to Urban Sociology. Chicago: University of Chicago Press. — 1967. Urban Sociology. Chicago: University of Chicago Press. Bursik, Robert J. 1984. "Urban Dynamics and Ecological Studies of Delinquency." Social Forces 63:393–413. Gosnell, Harold Foote. 1937. Machine Politics: Chicago Model. Hammersley, Martyn. 1989. The Dilemma of Qualitative Method: Herbert Blumer and the Chicago Tradition. London: Routledge. Hawley, Amos H. 1943. "Ecology and Human Ecology." Social Forces 22:398–405. — 1950. Human Ecology: A Theory of Community Structure. New York: Ronald Press. Konecki, Krzysztof T. 2017. "Qualitative Sociology." Pp. 143–52 (chap. 13) in The Cambridge Handbook of Sociology, edited by K. O. Korgen. Core Areas in Sociology and the Development of the Discipline 1. Cambridge: Cambridge University Press. Kurtz, Lester R. 1984. Evaluating Chicago Sociology: A Guide to the Literature, with an Annotated Bibliography. Chicago: University of Chicago Press. [provides a comprehensive history of the Chicago school]. McKenzie, Roderick D. 1924. "The Ecological Approach to the Study of the Human Community." American Journal of Sociology 30:287–301. Park, Robert E. 1915. "The City: Suggestions for the Investigation of Behavior in the City Environment." American Journal of Sociology 20:579–83. Stark, et al. 1983. "Beyond Durkheim." Journal for the Scientific Study of Religion 22:120–31. Wirth, Louis. 1938. "Urbanism as a Way of Life: The City and Contemporary Civilization." American Journal of Sociology 44:1–24. External links Howard Becker, "The Chicago School, So-called" Criminology Sociological theories Schools of sociological thought Urban sociology University of Chicago
0.761377
0.991785
0.755123
Einstein's thought experiments
A hallmark of Albert Einstein's career was his use of visualized thought experiments as a fundamental tool for understanding physical issues and for elucidating his concepts to others. Einstein's thought experiments took diverse forms. In his youth, he mentally chased beams of light. For special relativity, he employed moving trains and flashes of lightning to explain his most penetrating insights. For general relativity, he considered a person falling off a roof, accelerating elevators, blind beetles crawling on curved surfaces and the like. In his debates with Niels Bohr on the nature of reality, he proposed imaginary devices that attempted to show, at least in concept, how the Heisenberg uncertainty principle might be evaded. In a profound contribution to the literature on quantum mechanics, Einstein considered two particles briefly interacting and then flying apart so that their states are correlated, anticipating the phenomenon known as quantum entanglement. Introduction A thought experiment is a logical argument or mental model cast within the context of an imaginary (hypothetical or even counterfactual) scenario. A scientific thought experiment, in particular, may examine the implications of a theory, law, or set of principles with the aid of fictive and/or natural particulars (demons sorting molecules, cats whose lives hinge upon a radioactive disintegration, men in enclosed elevators) in an idealized environment (massless trapdoors, absence of friction). They describe experiments that, except for some specific and necessary idealizations, could conceivably be performed in the real world. As opposed to physical experiments, thought experiments do not report new empirical data. They can only provide conclusions based on deductive or inductive reasoning from their starting assumptions. Thought experiments invoke particulars that are irrelevant to the generality of their conclusions. It is the invocation of these particulars that give thought experiments their experiment-like appearance. A thought experiment can always be reconstructed as a straightforward argument, without the irrelevant particulars. John D. Norton, a well-known philosopher of science, has noted that "a good thought experiment is a good argument; a bad thought experiment is a bad argument." When effectively used, the irrelevant particulars that convert a straightforward argument into a thought experiment can act as "intuition pumps" that stimulate readers' ability to apply their intuitions to their understanding of a scenario. Thought experiments have a long history. Perhaps the best known in the history of modern science is Galileo's demonstration that falling objects must fall at the same rate regardless of their masses. This has sometimes been taken to be an actual physical demonstration, involving his climbing up the Leaning Tower of Pisa and dropping two heavy weights off it. In fact, it was a logical demonstration described by Galileo in Discorsi e dimostrazioni matematiche (1638). Einstein had a highly visual understanding of physics. His work in the patent office "stimulated [him] to see the physical ramifications of theoretical concepts." These aspects of his thinking style inspired him to fill his papers with vivid practical detail making them quite different from, say, the papers of Lorentz or Maxwell. This included his use of thought experiments. 
Special relativity Pursuing a beam of light Late in life, Einstein recalled the thought experiment of his youth in which he imagined chasing a beam of light. His recollections of these youthful musings are widely cited because of the hints they provide of his later great discovery. However, Norton has noted that Einstein's reminiscences were probably colored by a half-century of hindsight. Norton lists several problems with Einstein's recounting, both historical and scientific: 1. At 16 years old and a student at the Gymnasium in Aarau, Einstein would have had the thought experiment in late 1895 to early 1896. But various sources note that Einstein did not learn Maxwell's theory until 1898, in university. 2. A 19th-century aether theorist would have had no difficulties with the thought experiment. Einstein's statement, "...there seems to be no such thing...on the basis of experience," would not have counted as an objection, but would have represented a mere statement of fact, since no one had ever traveled at such speeds. 3. An aether theorist would have regarded "...nor according to Maxwell's equations" as simply representing a misunderstanding on Einstein's part. Unfettered by any notion that the speed of light represents a cosmic limit, the aether theorist would simply have set velocity equal to c, noted that yes indeed, the light would appear to be frozen, and then thought no more of it. Rather than the thought experiment being at all incompatible with aether theories (which it is not), the youthful Einstein appears to have reacted to the scenario out of an intuitive sense of wrongness. He felt that the laws of optics should obey the principle of relativity. As he grew older, his early thought experiment acquired deeper levels of significance: Einstein felt that Maxwell's equations should be the same for all observers in inertial motion. From Maxwell's equations, one can deduce a single speed of light, and there is nothing in this computation that depends on an observer's speed. Einstein sensed a conflict between Newtonian mechanics and the constant speed of light determined by Maxwell's equations. Regardless of the historical and scientific issues described above, Einstein's early thought experiment was part of the repertoire of test cases that he used to check on the viability of physical theories. Norton suggests that the real importance of the thought experiment was that it provided a powerful objection to emission theories of light, which Einstein had worked on for several years prior to 1905. Magnet and conductor In the very first paragraph of Einstein's seminal 1905 work introducing special relativity, he describes the asymmetry that arises when Maxwell's electrodynamics is applied to a magnet and a conductor in relative motion. This opening paragraph recounts well-known experimental results obtained by Michael Faraday in 1831. The experiments describe what appeared to be two different phenomena: the motional EMF generated when a wire moves through a magnetic field (see Lorentz force), and the transformer EMF generated by a changing magnetic field (due to the Maxwell–Faraday equation). James Clerk Maxwell himself drew attention to this fact in his 1861 paper On Physical Lines of Force. In the latter half of Part II of that paper, Maxwell gave a separate physical explanation for each of the two phenomena. Although Einstein calls the asymmetry "well-known", there is no evidence that any of Einstein's contemporaries considered the distinction between motional EMF and transformer EMF to be in any way odd or pointing to a lack of understanding of the underlying physics.
Maxwell, for instance, had repeatedly discussed Faraday's laws of induction, stressing that the magnitude and direction of the induced current was a function only of the relative motion of the magnet and the conductor, without being bothered by the clear distinction between conductor-in-motion and magnet-in-motion in the underlying theoretical treatment. Yet Einstein's reflection on this experiment represented the decisive moment in his long and tortuous path to special relativity. Although the equations describing the two scenarios are entirely different, there is no measurement that can distinguish whether the magnet is moving, the conductor is moving, or both. In a 1920 review on the Fundamental Ideas and Methods of the Theory of Relativity (unpublished), Einstein related how disturbing he found this asymmetry. Einstein needed to extend the relativity of motion that he perceived between magnet and conductor in the above thought experiment to a full theory. For years, however, he did not know how this might be done. The exact path that Einstein took to resolve this issue is unknown. We do know, however, that Einstein spent several years pursuing an emission theory of light, encountering difficulties that eventually led him to give up the attempt. That decision ultimately led to his development of special relativity as a theory founded on two postulates. Einstein's original expression of these postulates was: "The laws governing the changes of the state of any physical system do not depend on which one of two coordinate systems in uniform translational motion relative to each other these changes of the state are referred to. Each ray of light moves in the coordinate system "at rest" with the definite velocity V independent of whether this ray of light is emitted by a body at rest or a body in motion." In their modern form: 1. The laws of physics take the same form in all inertial frames. 2. In any given inertial frame, the velocity of light c is the same whether the light be emitted by a body at rest or by a body in uniform motion. Einstein's wording of the first postulate was one with which nearly all theorists of his day could agree. His second postulate expresses a new idea about the character of light. Modern textbooks combine the two postulates. One popular textbook expresses the second postulate as "The speed of light in free space has the same value c in all directions and in all inertial reference frames." Trains, embankments, and lightning flashes The topic of how Einstein arrived at special relativity has been a fascinating one to many scholars: A lowly, twenty-six-year-old patent officer (third class), largely self-taught in physics and completely divorced from mainstream research, nevertheless in the year 1905 produced four extraordinary works (Annus Mirabilis papers), only one of which (his paper on Brownian motion) appeared related to anything that he had ever published before. Einstein's paper, On the Electrodynamics of Moving Bodies, is a polished work that bears few traces of its gestation. Documentary evidence concerning the development of the ideas that went into it consists of, quite literally, only two sentences in a handful of preserved early letters, and various later historical remarks by Einstein himself, some of them known only second-hand and at times contradictory.
In regard to the relativity of simultaneity, Einstein's 1905 paper develops the concept vividly by carefully considering the basics of how time may be disseminated through the exchange of signals between clocks. In his popular work, Relativity: The Special and General Theory, Einstein translates the formal presentation of his paper into a thought experiment using a train, a railway embankment, and lightning flashes. The essence of the thought experiment is as follows: Observer M stands on an embankment, while observer M' rides on a rapidly traveling train. At the precise moment that M and M' coincide in their positions, lightning strikes points A and B equidistant from M and M'. Light from these two flashes reaches M at the same time, from which M concludes that the bolts were synchronous. The combination of Einstein's first and second postulates implies that, despite the rapid motion of the train relative to the embankment, M' measures exactly the same speed of light as does M. Since M' was equidistant from A and B when lightning struck, the fact that M' receives light from B before light from A means that to M', the bolts were not synchronous. Instead, the bolt at B struck first. A routine supposition among historians of science is that, in accordance with the analysis given in his 1905 special relativity paper and in his popular writings, Einstein discovered the relativity of simultaneity by thinking about how clocks could be synchronized by light signals. The Einstein synchronization convention was originally developed by telegraphers in the mid-19th century. The dissemination of precise time was an increasingly important topic during this period. Trains needed accurate time to schedule use of track, cartographers needed accurate time to determine longitude, while astronomers and surveyors dared to consider the worldwide dissemination of time to accuracies of thousandths of a second. Following this line of argument, Einstein's position in the patent office, where he specialized in evaluating electromagnetic and electromechanical patents, would have exposed him to the latest developments in time technology, which would have guided him in his thoughts towards understanding the relativity of simultaneity. However, all of the above is supposition. In later recollections, when Einstein was asked about what inspired him to develop special relativity, he would mention his riding a light beam and his magnet and conductor thought experiments. He would also mention the importance of the Fizeau experiment and the observation of stellar aberration. "They were enough", he said. He never mentioned thought experiments about clocks and their synchronization. The routine analyses of the Fizeau experiment and of stellar aberration, which treat light as Newtonian corpuscles, do not require relativity. But problems arise if one considers light as waves traveling through an aether, which are resolved by applying the relativity of simultaneity. It is entirely possible, therefore, that Einstein arrived at special relativity through a different path than that commonly assumed, through his examination of Fizeau's experiment and stellar aberration. We therefore do not know just how important clock synchronization and the train and embankment thought experiment were to Einstein's development of the concept of the relativity of simultaneity. We do know, however, that the train and embankment thought experiment was the preferred means whereby he chose to teach this concept to the general public.
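The asymmetry in the train-and-embankment scenario can be made quantitative in a few lines. The sketch below is a standard textbook-style reconstruction, not Einstein's own presentation; the symbols L (the embankment-frame distance from M to each strike point) and v (the train's speed toward B) are introduced here purely for illustration.

```latex
% Worked sketch of the train-and-embankment asymmetry (illustrative symbols L and v,
% not Einstein's notation); compiles as a standalone LaTeX document.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
In the embankment frame the strikes at $A$ and $B$ occur a distance $L$ on either side
of $M$ just as $M'$ passes $M$, and $M'$ moves toward $B$ with speed $v$. The flash
from $B$ reaches $M'$ after a time $t_B$ satisfying $c\,t_B = L - v\,t_B$, and the flash
from $A$ after a time $t_A$ satisfying $c\,t_A = L + v\,t_A$, so
\[
  t_B = \frac{L}{c+v}, \qquad t_A = \frac{L}{c-v}, \qquad t_A > t_B .
\]
$M'$ therefore meets the flash from $B$ first. Since $M'$ also measures the speed of
light as $c$ and was midway between $A$ and $B$, $M'$ must conclude that the bolt at
$B$ struck earlier: the relativity of simultaneity.
\end{document}
```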
Relativistic center-of-mass theorem Einstein proposed the equivalence of mass and energy in his final Annus Mirabilis paper. Over the next several decades, the understanding of energy and its relationship with momentum were further developed by Einstein and other physicists including Max Planck, Gilbert N. Lewis, Richard C. Tolman, Max von Laue (who in 1911 gave a comprehensive proof of mass–energy equivalence from the stress–energy tensor), and Paul Dirac (whose investigations of negative solutions in his 1928 formulation of the energy–momentum relation led to the 1930 prediction of the existence of antimatter). Einstein's relativistic center-of-mass theorem of 1906 is a case in point. In 1900, Henri Poincaré had noted a paradox in modern physics as it was then understood: When he applied well-known results of Maxwell's equations to the equality of action and reaction, he could describe a cyclic process which would result in creation of a reactionless drive, i.e. a device which could displace its center of mass without the exhaust of a propellant, in violation of the conservation of momentum. Poincaré resolved this paradox by imagining electromagnetic energy to be a fluid having a given density, which is created and destroyed with a given momentum as energy is absorbed and emitted. The motions of this fluid would oppose displacement of the center of mass in such fashion as to preserve the conservation of momentum. Einstein demonstrated that Poincaré's artifice was superfluous. Rather, he argued that mass-energy equivalence was a necessary and sufficient condition to resolve the paradox. In his demonstration, Einstein provided a derivation of mass-energy equivalence that was distinct from his original derivation. Einstein began by recasting Poincaré's abstract mathematical argument into the form of a thought experiment: Einstein considered (a) an initially stationary, closed, hollow cylinder free-floating in space, of mass M and length L, (b) with some sort of arrangement for sending a quantity of radiative energy E (a burst of photons) from the left to the right. The radiation has momentum p = E/c. Since the total momentum of the system is zero, the cylinder recoils with a speed v = E/(Mc). (c) The radiation hits the other end of the cylinder in time t ≈ L/c (assuming v is much smaller than c), bringing the cylinder to a stop after it has moved through a distance Δx ≈ vt = EL/(Mc²). (d) The energy deposited on the right wall of the cylinder is transferred to a massless shuttle mechanism (e) which transports the energy to the left wall (f) and then returns to re-create the starting configuration of the system, except with the cylinder displaced to the left. The cycle may then be repeated. The reactionless drive described here violates the laws of mechanics, according to which the center of mass of a body at rest cannot be displaced in the absence of external forces. Einstein argued that the shuttle cannot be massless while transferring energy from the right to the left. If energy E possesses the inertia E/c², the contradiction disappears. Modern analysis suggests that neither Einstein's original 1905 derivation of mass-energy equivalence nor the alternate derivation implied by his 1906 center-of-mass theorem are definitively correct. For instance, the center-of-mass thought experiment regards the cylinder as a completely rigid body. In reality, the impulse provided to the cylinder by the burst of light in step (b) cannot travel faster than light, so that when the burst of photons reaches the right wall in step (c), the wall has not yet begun to move.
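The bookkeeping behind the 1906 argument can be summarized compactly. The following is a minimal sketch under the idealizations stated above (rigid cylinder, recoil speed much smaller than c), using the symbols M, L and E from the description; it reconstructs the standard form of the argument rather than quoting Einstein's paper.

```latex
% Sketch of the centre-of-mass bookkeeping for the radiation-filled cylinder
% (symbols M, L, E as in the text); a standard reconstruction, not Einstein's wording.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
While the burst of energy $E$ crosses the cylinder, the cylinder recoils with speed
$v = E/(Mc)$ for a time $t \approx L/c$, so it shifts by
\[
  \Delta x \;\approx\; v\,t \;=\; \frac{E L}{M c^{2}} .
\]
With no external force, the centre of mass of the isolated system must stay fixed.
That is only possible if the radiation itself carries an equivalent mass $m$ across
the length of the cylinder, with
\[
  M\,\Delta x \;=\; m\,L
  \quad\Longrightarrow\quad
  m \;=\; \frac{E}{c^{2}} ,
\]
which is the inertia of energy that makes Poincar\'e's compensating fluid superfluous.
\end{document}
```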
Ohanian has credited von Laue (1911) as having provided the first truly definitive derivation of mass-energy equivalence. Impossibility of faster-than-light signaling In 1907, Einstein noted that from the composition law for velocities, one could deduce that there cannot exist an effect that allows faster-than-light signaling. Einstein imagined a strip of material that allows propagation of signals at the faster-than-light speed W (as viewed from the material strip). Imagine two observers, A and B, standing on the x-axis and separated by the distance L. They stand next to the material strip, which is not at rest, but rather is moving in the negative x-direction with speed v. A uses the strip to send a signal to B. From the velocity composition formula, the signal propagates from A to B with speed (W − v)/(1 − Wv/c²). The time T required for the signal to propagate from A to B is given by T = L(1 − Wv/c²)/(W − v). The strip can move at any speed v < c. Given the starting assumption W > c, one can always set the strip moving at a speed v such that T < 0. In other words, given the existence of a means of transmitting signals faster than light, scenarios can be envisioned whereby the recipient of a signal will receive the signal before the transmitter has transmitted it. About this thought experiment, Einstein wrote: General relativity Falling painters and accelerating elevators In his unpublished 1920 review, Einstein related the genesis of his thoughts on the equivalence principle: The realization "startled" Einstein, and inspired him to begin an eight-year quest that led to what is considered to be his greatest work, the theory of general relativity. Over the years, the story of the falling man has become an iconic one, much embellished by other writers. In most retellings of Einstein's story, the falling man is identified as a painter. In some accounts, Einstein was inspired after he witnessed a painter falling from the roof of a building adjacent to the patent office where he worked. This version of the story leaves unanswered the question of why Einstein might consider his observation of such an unfortunate accident to represent the happiest thought in his life. Einstein later refined his thought experiment to consider a man inside a large enclosed chest or elevator falling freely in space. While in free fall, the man would consider himself weightless, and any loose objects that he emptied from his pockets would float alongside him. Then Einstein imagined a rope attached to the roof of the chamber. A powerful "being" of some sort begins pulling on the rope with constant force. The chamber begins to move "upwards" with a uniformly accelerated motion. Within the chamber, all of the man's perceptions are consistent with his being in a uniform gravitational field. Einstein asked, "Ought we to smile at the man and say that he errs in his conclusion?" Einstein answered no. Rather, the thought experiment provided "good grounds for extending the principle of relativity to include bodies of reference which are accelerated with respect to each other, and as a result we have gained a powerful argument for a generalised postulate of relativity." Through this thought experiment, Einstein addressed an issue so well known that scientists rarely worried about it or considered it puzzling: Objects have "gravitational mass," which determines the force with which they are attracted to other objects. Objects also have "inertial mass," which determines the relationship between the force applied to an object and how much it accelerates. 
Newton had pointed out that, even though they are defined differently, gravitational mass and inertial mass always seem to be equal. But until Einstein, no one had conceived a good explanation as to why this should be so. From the correspondence revealed by his thought experiment, Einstein concluded that "it is impossible to discover by experiment whether a given system of coordinates is accelerated, or whether...the observed effects are due to a gravitational field." This correspondence between gravitational mass and inertial mass is the equivalence principle. An extension to his accelerating observer thought experiment allowed Einstein to deduce that "rays of light are propagated curvilinearly in gravitational fields." Early applications of the equivalence principle Einstein's formulation of special relativity was in terms of kinematics (the study of moving bodies without reference to forces). Late in 1907, his former mathematics professor, Hermann Minkowski, presented an alternative, geometric interpretation of special relativity in a lecture to the Göttingen Mathematical society, introducing the concept of spacetime. Einstein was initially dismissive of Minkowski's geometric interpretation, regarding it as überflüssige Gelehrsamkeit (superfluous learnedness). As with special relativity, Einstein's early results in developing what was ultimately to become general relativity were accomplished using kinematic analysis rather than geometric techniques of analysis. In his 1907 Jahrbuch paper, Einstein first addressed the question of whether the propagation of light is influenced by gravitation, and whether there is any effect of a gravitational field on clocks. In 1911, Einstein returned to this subject, in part because he had realized that certain predictions of his nascent theory were amenable to experimental test. By the time of his 1911 paper, Einstein and other scientists had offered several alternative demonstrations that the inertial mass of a body increases with its energy content: If the energy increase of the body is E, then the increase in its inertial mass is E/c². Einstein asked whether there is an increase of gravitational mass corresponding to the increase in inertial mass, and if there is such an increase, is the increase in gravitational mass precisely the same as its increase in inertial mass? Using the equivalence principle, Einstein concluded that this must be so. To show that the equivalence principle necessarily implies the gravitation of energy, Einstein considered a light source S₂ separated along the z-axis by a distance h above a receiver S₁ in a homogeneous gravitational field having a force per unit mass of g. A certain amount of electromagnetic energy E₂ is emitted by S₂ towards S₁. According to the equivalence principle, this system is equivalent to a gravitation-free system which moves with uniform acceleration g in the direction of the positive z-axis, with S₂ separated by a constant distance h from S₁. In the accelerated system, light emitted from S₂ takes (to a first approximation) a time h/c to arrive at S₁. But in this time, the velocity of S₁ will have increased by v = gh/c from its velocity when the light was emitted. 
The energy E₁ arriving at S₁ will therefore not be the energy E₂ but the greater energy given by E₁ = E₂(1 + v/c) = E₂(1 + gh/c²). According to the equivalence principle, the same relation holds for the non-accelerated system in a gravitational field, where we replace gh by the gravitational potential difference Φ between S₂ and S₁, so that E₁ = E₂(1 + Φ/c²). The energy arriving at S₁ is greater than the energy emitted by S₂ by the potential energy of the mass E₂/c² in the gravitational field. Hence E₂/c² corresponds to the gravitational mass as well as the inertial mass of the quantity of energy E₂. To further clarify that the energy of gravitational mass must equal the energy of inertial mass, Einstein proposed the following cyclic process: (a) A light source S₂ is situated a distance h above a receiver S₁ in a uniform gravitational field. A movable mass m can shuttle between S₂ and S₁. (b) A pulse of electromagnetic energy E is sent from S₂ to S₁. The energy E(1 + gh/c²) is absorbed by S₁. (c) Mass m is lowered from S₂ to S₁, releasing an amount of work equal to mgh. (d) The energy absorbed by S₁ is transferred to the mass. This increases the gravitational mass of m to a new value m′. (e) The mass is lifted back to S₂, requiring the input of work m′gh. (f) The energy carried by the mass is then transferred to S₂, completing the cycle. Conservation of energy demands that the difference in work between raising the mass and lowering the mass, m′gh − mgh, must equal the extra energy E·gh/c² absorbed at S₁, or one could potentially define a perpetual motion machine. Therefore, m′ − m = E/c². In other words, the increase in gravitational mass predicted by the above arguments is precisely equal to the increase in inertial mass predicted by special relativity. Einstein then considered sending a continuous electromagnetic beam of frequency ν₂ (as measured at S₂) from S₂ to S₁ in a homogeneous gravitational field. The frequency of the light as measured at S₁ will be a larger value ν₁ given by ν₁ = ν₂(1 + gh/c²). Einstein noted that the above equation seemed to imply something absurd: Given that the transmission of light from S₂ to S₁ is continuous, how could the number of periods emitted per second from S₂ be different from that received at S₁? It is impossible for wave crests to appear on the way down from S₂ to S₁. The simple answer is that this question presupposes an absolute nature of time, when in fact there is nothing that compels us to assume that clocks situated at different gravitational potentials must be conceived of as going at the same rate. The principle of equivalence implies gravitational time dilation. It is important to realize that Einstein's arguments predicting gravitational time dilation are valid for any theory of gravity that respects the principle of equivalence. This includes Newtonian gravitation. Experiments such as the Pound–Rebka experiment, which have firmly established gravitational time dilation, therefore do not serve to distinguish general relativity from Newtonian gravitation. In the remainder of Einstein's 1911 paper, he discussed the bending of light rays in a gravitational field, but given the incomplete nature of Einstein's theory as it existed at the time, the value that he predicted was half the value that would later be predicted by the full theory of general relativity. Non-Euclidean geometry and the rotating disk By 1912, Einstein had reached an impasse in his kinematic development of general relativity, realizing that he needed to go beyond the mathematics that he knew and was familiar with. Stachel has identified Einstein's analysis of the rigid relativistic rotating disk as being key to this realization. 
The rigid rotating disk had been a topic of lively discussion since Max Born and Paul Ehrenfest, in 1909, both presented analyses of rigid bodies in special relativity. An observer on the edge of a rotating disk experiences an apparent ("fictitious" or "pseudo") force called "centrifugal force". By 1912, Einstein had become convinced of a close relationship between gravitation and pseudo-forces such as centrifugal force: In the accompanying illustration, A represents a circular disk of 10 units diameter at rest in an inertial reference frame. The circumference of the disk is π times the diameter, and the illustration shows 31.4 rulers laid out along the circumference. B represents a circular disk of 10 units diameter that is spinning rapidly. According to a non-rotating observer, each of the rulers along the circumference is length-contracted along its line of motion. More rulers are required to cover the circumference, while the number of rulers required to span the diameter is unchanged. Note that we have not stated that we set A spinning to get B. In special relativity, it is not possible to set spinning a disk that is "rigid" in Born's sense of the term. Since spinning up disk A would cause the material to contract in the circumferential direction but not in the radial direction, a rigid disk would become fragmented from the induced stresses. In later years, Einstein repeatedly stated that consideration of the rapidly rotating disk was of "decisive importance" to him because it showed that a gravitational field causes non-Euclidean arrangements of measuring rods. Einstein realized that he did not have the mathematical skills to describe the non-Euclidean view of space and time that he envisioned, so he turned to his mathematician friend, Marcel Grossmann, for help. After researching in the library, Grossmann found a review article by Ricci and Levi-Civita on absolute differential calculus (tensor calculus). Grossmann tutored Einstein on the subject, and in 1913 and 1914, they published two joint papers describing an initial version of a generalized theory of gravitation. Over the next several years, Einstein used these mathematical tools to generalize Minkowski's geometric approach to relativity so as to encompass curved spacetime. Quantum mechanics Background: Einstein and the quantum Many myths have grown up about Einstein's relationship with quantum mechanics. Freshman physics students are aware that Einstein explained the photoelectric effect and introduced the concept of the photon. But students who have grown up with the photon may not be aware of how revolutionary the concept was for his time. The best-known factoids about Einstein's relationship with quantum mechanics are his statement, "God does not play dice with the universe" and the indisputable fact that he just did not like the theory in its final form. This has led to the general impression that, despite his initial contributions, Einstein was out of touch with quantum research and played at best a secondary role in its development. Concerning Einstein's estrangement from the general direction of physics research after 1925, his well-known scientific biographer, Abraham Pais, wrote: In hindsight, we know that Pais was incorrect in his assessment. Einstein was arguably the greatest single contributor to the "old" quantum theory. In his 1905 paper on light quanta, Einstein created the quantum theory of light. 
His proposal that light exists as tiny packets (photons) was so revolutionary, that even such major pioneers of quantum theory as Planck and Bohr refused to believe that it could be true. Bohr, in particular, was a passionate disbeliever in light quanta, and repeatedly argued against them until 1925, when he yielded in the face of overwhelming evidence for their existence. In his 1906 theory of specific heats, Einstein was the first to realize that quantized energy levels explained the specific heat of solids. In this manner, he found a rational justification for the third law of thermodynamics (i.e. the entropy of any system approaches zero as the temperature approaches absolute zero): at very cold temperatures, atoms in a solid do not have enough thermal energy to reach even the first excited quantum level, and so cannot vibrate. Einstein proposed the wave–particle duality of light. In 1909, using a rigorous fluctuation argument based on a thought experiment and drawing on his previous work on Brownian motion, he predicted the emergence of a "fusion theory" that would combine the two views. Basically, he demonstrated that the Brownian motion experienced by a mirror in thermal equilibrium with black-body radiation would be the sum of two terms, one due to the wave properties of radiation, the other due to its particulate properties. Although Planck is justly hailed as the father of quantum mechanics, his derivation of the law of black-body radiation rested on fragile ground, since it required ad hoc assumptions of an unreasonable character. Furthermore, Planck's derivation represented an analysis of classical harmonic oscillators merged with quantum assumptions in an improvised fashion. In his 1916 theory of radiation, Einstein was the first to create a purely quantum explanation. This paper, well known for broaching the possibility of stimulated emission (the basis of the laser), changed the nature of the evolving quantum theory by introducing the fundamental role of random chance. In 1924, Einstein received a short manuscript by an unknown Indian professor, Satyendra Nath Bose, outlining a new method of deriving the law of blackbody radiation. Einstein was intrigued by Bose's peculiar method of counting the number of distinct ways of putting photons into the available states, a method of counting that Bose apparently did not realize was unusual. Einstein, however, understood that Bose's counting method implied that photons are, in a deep sense, indistinguishable. He translated the paper into German and had it published. Einstein then followed Bose's paper with an extension to Bose's work which predicted Bose–Einstein condensation, one of the fundamental research topics of condensed matter physics. While trying to develop a mathematical theory of light which would fully encompass its wavelike and particle-like aspects, Einstein developed the concept of "ghost fields". A guiding wave obeying Maxwell's classical laws would propagate following the normal laws of optics, but would not transmit any energy. This guiding wave, however, would govern the appearance of quanta of energy on a statistical basis, so that the appearance of these quanta would be proportional to the intensity of the interference radiation. These ideas became widely known in the physics community, and through Born's work in 1926, later became a key concept in the modern quantum theory of radiation and matter. 
Therefore, Einstein before 1925 originated most of the key concepts of quantum theory: light quanta, wave–particle duality, the fundamental randomness of physical processes, the concept of indistinguishability, and the probability density interpretation of the wave equation. In addition, Einstein can arguably be considered the father of solid state physics and condensed matter physics. He provided a correct derivation of the blackbody radiation law and sparked the notion of the laser. What of after 1925? In 1935, working with two younger colleagues, Einstein issued a final challenge to quantum mechanics, attempting to show that it could not represent a final solution. Despite the questions raised by this paper, it made little or no difference to how physicists employed quantum mechanics in their work. Of this paper, Pais was to write: In contrast to Pais' negative assessment, this paper, outlining the EPR paradox, has become one of the most widely cited articles in the entire physics literature. It is considered the centerpiece of the development of quantum information theory, which has been termed the "third quantum revolution." Wave–particle duality All of Einstein's major contributions to the old quantum theory were arrived at via statistical argument. This includes his 1905 paper arguing that light has particle properties, his 1906 work on specific heats, his 1909 introduction of the concept of wave–particle duality, his 1916 work presenting an improved derivation of the blackbody radiation formula, and his 1924 work that introduced the concept of indistinguishability. Einstein's 1909 arguments for the wave–particle duality of light were based on a thought experiment. Einstein imagined a mirror in a cavity containing particles of an ideal gas and filled with black-body radiation, with the entire system in thermal equilibrium. The mirror is constrained in its motions to a direction perpendicular to its surface. The mirror jiggles from Brownian motion due to collisions with the gas molecules. Since the mirror is in a radiation field, the moving mirror transfers some of its kinetic energy to the radiation field as a result of the difference in the radiation pressure between its forward and reverse surfaces. This implies that there must be fluctuations in the black-body radiation field, and hence fluctuations in the black-body radiation pressure. Reversing the argument shows that there must be a route for the return of energy from the fluctuating black-body radiation field back to the gas molecules. Given the known shape of the radiation field given by Planck's law, Einstein could calculate the mean square energy fluctuation of the black-body radiation. He found the mean square energy fluctuation in a small volume V of a cavity filled with thermal radiation in the frequency interval between ν and ν + dν to be a function of frequency and temperature: ⟨ε²⟩ = hν⟨E⟩ + c³⟨E⟩²/(8πν²V dν), where ⟨E⟩ would be the average energy of the volume in contact with the thermal bath. The above expression has two terms, the second corresponding to the classical Rayleigh-Jeans law (i.e. a wavelike term), and the first corresponding to the Wien distribution law (which, from Einstein's 1905 analysis, would result from point-like quanta with energy hν). From this, Einstein concluded that radiation had simultaneous wave and particle aspects. Bubble paradox From 1905 to 1923, Einstein was virtually the only physicist who took light-quanta seriously. 
Throughout most of this period, the physics community treated the light-quanta hypothesis with "skepticism bordering on derision" and maintained this attitude even after Einstein's photoelectric law was validated. The citation for Einstein's 1922 Nobel Prize very deliberately avoided all mention of light-quanta, instead stating that it was being awarded for "his services to theoretical physics and especially for his discovery of the law of the photoelectric effect". This dismissive stance contrasts sharply with the enthusiastic manner in which Einstein's other major contributions were accepted, including his work on Brownian motion, special relativity, general relativity, and his numerous other contributions to the "old" quantum theory. Various explanations have been given for this neglect on the part of the physics community. First and foremost was wave theory's long and indisputable success in explaining purely optical phenomena. Second was the fact that his 1905 paper, which pointed out that certain phenomena would be more readily explained under the assumption that light is particulate, presented the hypothesis only as a "heuristic viewpoint". The paper offered no compelling, comprehensive alternative to existing electromagnetic theory. Third was the fact that his 1905 paper introducing light quanta and his two 1909 papers that argued for a wave–particle fusion theory approached their subjects via statistical arguments that his contemporaries "might accept as theoretical exercise—crazy, perhaps, but harmless". Most of Einstein's contemporaries adopted the position that light is ultimately a wave, but appears particulate in certain circumstances only because atoms absorb wave energy in discrete units. Among the thought experiments that Einstein presented in his 1909 lecture on the nature and constitution of radiation was one that he used to point out the implausibility of the above argument. He used this thought experiment to argue that atoms emit light as discrete particles rather than as continuous waves: (a) An electron in a cathode ray beam strikes an atom in a target. The intensity of the beam is set so low that we can consider one electron at a time as impinging on the target. (b) The atom emits a spherically radiating electromagnetic wave. (c) This wave excites an atom in a secondary target, causing it to release an electron of energy comparable to that of the original electron. The energy of the secondary electron depends only on the energy of the original electron and not at all on the distance between the primary and secondary targets. All the energy spread around the circumference of the radiating electromagnetic wave would appear to be instantaneously focused on the target atom, an action that Einstein considered implausible. Far more plausible would be to say that the first atom emitted a particle in the direction of the second atom. Although Einstein originally presented this thought experiment as an argument for light having a particulate nature, it has been noted that this thought experiment, which has been termed the "bubble paradox", foreshadows the famous 1935 EPR paper. In his 1927 Solvay debate with Bohr, Einstein employed this thought experiment to illustrate that according to the Copenhagen interpretation of quantum mechanics that Bohr championed, the quantum wavefunction of a particle would abruptly collapse like a "popped bubble" no matter how widely dispersed the wavefunction. 
The transmission of energy from opposite sides of the bubble to a single point would occur faster than light, violating the principle of locality. In the end, it was experiment, not any theoretical argument, that finally enabled the concept of the light quantum to prevail. In 1923, Arthur Compton was studying the scattering of high energy X-rays from a graphite target. Unexpectedly, he found that the scattered X-rays were shifted in wavelength, corresponding to inelastic scattering of the X-rays by the electrons in the target. His observations were totally inconsistent with wave behavior, but instead could only be explained if the X-rays acted as particles. This observation of the Compton effect rapidly brought about a change in attitude, and by 1926, the concept of the "photon" was generally accepted by the physics community. Einstein's light box Einstein did not like the direction in which quantum mechanics had turned after 1925. Although excited by Heisenberg's matrix mechanics, Schroedinger's wave mechanics, and Born's clarification of the meaning of the Schroedinger wave equation (i.e. that the absolute square of the wave function is to be interpreted as a probability density), his instincts told him that something was missing. In a letter to Born, he wrote: The Solvay Debates between Bohr and Einstein began in dining-room discussions at the Fifth Solvay International Conference on Electrons and Photons in 1927. Einstein's issue with the new quantum mechanics was not just that, with the probability interpretation, it rendered invalid the notion of rigorous causality. After all, as noted above, Einstein himself had introduced random processes in his 1916 theory of radiation. Rather, by defining and delimiting the maximum amount of information obtainable in a given experimental arrangement, the Heisenberg uncertainty principle denied the existence of any knowable reality in terms of a complete specification of the momenta and description of individual particles, an objective reality that would exist whether or not we could ever observe it. Over dinner, during after-dinner discussions, and at breakfast, Einstein debated with Bohr and his followers on the question whether quantum mechanics in its present form could be called complete. Einstein illustrated his points with increasingly clever thought experiments intended to prove that position and momentum could in principle be simultaneously known to arbitrary precision. For example, one of his thought experiments involved sending a beam of electrons through a shuttered screen, recording the positions of the electrons as they struck a photographic screen. Bohr and his allies would always be able to counter Einstein's proposal, usually by the end of the same day. On the final day of the conference, Einstein revealed that the uncertainty principle was not the only aspect of the new quantum mechanics that bothered him. Quantum mechanics, at least in the Copenhagen interpretation, appeared to allow action at a distance, the ability for two separated objects to communicate at speeds greater than light. By 1928, the consensus was that Einstein had lost the debate, and even his closest allies during the Fifth Solvay Conference, for example Louis de Broglie, conceded that quantum mechanics appeared to be complete. At the Sixth Solvay International Conference on Magnetism (1930), Einstein came armed with a new thought experiment. This involved a box with a shutter that operated so quickly, it would allow only one photon to escape at a time. 
The box would first be weighed exactly. Then, at a precise moment, the shutter would open, allowing a photon to escape. The box would then be re-weighed. The well-known relationship between mass and energy would allow the energy of the particle to be precisely determined. With this gadget, Einstein believed that he had demonstrated a means to obtain, simultaneously, a precise determination of the energy of the photon as well as its exact time of departure from the system. Bohr was shaken by this thought experiment. Unable to think of a refutation, he went from one conference participant to another, trying to convince them that Einstein's thought experiment could not be true, that if it were true, it would literally mean the end of physics. After a sleepless night, he finally worked out a response which, ironically, depended on Einstein's general relativity. Consider the illustration of Einstein's light box: 1. After emitting a photon, the loss of weight causes the box to rise in the gravitational field. 2. The observer returns the box to its original height by adding weights until the pointer points to its initial position. It takes a certain amount of time for the observer to perform this procedure. How long it takes depends on the strength of the spring and on how well-damped the system is. If undamped, the box will bounce up and down forever. If over-damped, the box will return to its original position sluggishly (See Damped spring-mass system). 3. The longer that the observer allows the damped spring-mass system to settle, the closer the pointer will reach its equilibrium position. At some point, the observer will conclude that his setting of the pointer to its initial position is within an allowable tolerance. There will be some residual error Δq in returning the pointer to its initial position. Correspondingly, there will be some residual error Δm in the weight measurement. 4. Adding the weights imparts a momentum to the box which can be measured with an accuracy Δp delimited by Δp Δq ≈ h. It is clear that Δp < g t Δm, where g is the gravitational acceleration and t is the time taken to adjust the weights. Plugging in yields g t Δm Δq > h. 5. General relativity informs us that while the box has been at a height different than its original height, it has been ticking at a rate different than its original rate. The red shift formula informs us that there will be an uncertainty Δt = (g Δq/c²) t in the determination of the emission time of the photon. 6. Hence, c² Δm Δt = ΔE Δt > h. The accuracy with which the energy of the photon is measured restricts the precision with which its moment of emission can be measured, following the Heisenberg uncertainty principle. After his last attempt at finding a loophole around the uncertainty principle was refuted, Einstein quit trying to search for inconsistencies in quantum mechanics. Instead, he shifted his focus to the other aspects of quantum mechanics with which he was uncomfortable, focusing on his critique of action at a distance. His next paper on quantum mechanics foreshadowed his later paper on the EPR paradox. Einstein was gracious in his defeat. The following September, Einstein nominated Heisenberg and Schroedinger for the Nobel Prize, stating, "I am convinced that this theory undoubtedly contains a part of the ultimate truth." EPR paradox Einstein's fundamental dispute with quantum mechanics was not about whether God rolled dice, whether the uncertainty principle allowed simultaneous measurement of position and momentum, or even whether quantum mechanics was complete. It was about reality. Does a physical reality exist independent of our ability to observe it? 
To Bohr and his followers, such questions were meaningless. All that we can know are the results of measurements and observations. It makes no sense to speculate about an ultimate reality that exists beyond our perceptions. Einstein's beliefs had evolved over the years from those that he had held when he was young, when, as a logical positivist heavily influenced by his reading of David Hume and Ernst Mach, he had rejected such unobservable concepts as absolute time and space. Einstein believed: 1. A reality exists independent of our ability to observe it. 2. Objects are located at distinct points in spacetime and have their own independent, real existence. In other words, he believed in separability and locality. 3. Although at a superficial level, quantum events may appear random, at some ultimate level, strict causality underlies all processes in nature. Einstein considered that realism and localism were fundamental underpinnings of physics. After leaving Nazi Germany and settling in Princeton at the Institute for Advanced Study, Einstein began writing up a thought experiment that he had been mulling over since attending a lecture by Léon Rosenfeld in 1933. Since the paper was to be in English, Einstein enlisted the help of the 46-year-old Boris Podolsky, a fellow who had moved to the institute from Caltech; he also enlisted the help of the 26-year-old Nathan Rosen, also at the institute, who did much of the math. The result of their collaboration was the four page EPR paper, which in its title asked the question Can Quantum-Mechanical Description of Physical Reality be Considered Complete? After seeing the paper in print, Einstein found himself unhappy with the result. His clear conceptual visualization had been buried under layers of mathematical formalism. Einstein's thought experiment involved two particles that have collided or which have been created in such a way that they have properties which are correlated. The total wave function for the pair links the positions of the particles as well as their linear momenta. The figure depicts the spreading of the wave function from the collision point. However, observation of the position of the first particle allows us to determine precisely the position of the second particle no matter how far the pair have separated. Likewise, measuring the momentum of the first particle allows us to determine precisely the momentum of the second particle. "In accordance with our criterion for reality, in the first case we must consider the quantity P as being an element of reality, in the second case the quantity Q is an element of reality." Einstein concluded that the second particle, which we have never directly observed, must have at any moment a position that is real and a momentum that is real. Quantum mechanics does not account for these features of reality. Therefore, quantum mechanics is not complete. It is known, from the uncertainty principle, that position and momentum cannot be measured at the same time. But even though their values can only be determined in distinct contexts of measurement, can they both be definite at the same time? Einstein concluded that the answer must be yes. The only alternative, claimed Einstein, would be to assert that measuring the first particle instantaneously affected the reality of the position and momentum of the second particle. "No reasonable definition of reality could be expected to permit this." 
Bohr was stunned when he read Einstein's paper and spent more than six weeks framing his response, which he gave exactly the same title as the EPR paper. The EPR paper forced Bohr to make a major revision in his understanding of complementarity in the Copenhagen interpretation of quantum mechanics. Prior to EPR, Bohr had maintained that disturbance caused by the act of observation was the physical explanation for quantum uncertainty. In the EPR thought experiment, however, Bohr had to admit that "there is no question of a mechanical disturbance of the system under investigation." On the other hand, he noted that the two particles were one system described by one quantum function. Furthermore, the EPR paper did nothing to dispel the uncertainty principle. Later commentators have questioned the strength and coherence of Bohr's response. As a practical matter, however, physicists for the most part did not pay much attention to the debate between Bohr and Einstein, since the opposing views did not affect one's ability to apply quantum mechanics to practical problems, but only affected one's interpretation of the quantum formalism. If they thought about the problem at all, most working physicists tended to follow Bohr's leadership. In 1964, John Stewart Bell made the groundbreaking discovery that Einstein's local realist world view made experimentally verifiable predictions that would be in conflict with those of quantum mechanics. Bell's discovery shifted the Einstein–Bohr debate from philosophy to the realm of experimental physics. Bell's theorem showed that, for any local realist formalism, there exist limits on the predicted correlations between pairs of particles in an experimental realization of the EPR thought experiment. In 1972, the first experimental tests were carried out that demonstrated violation of these limits. Successive experiments improved the accuracy of observation and closed loopholes. To date, it is virtually certain that local realist theories have been falsified. The EPR paper has recently been recognized as prescient, since it identified the phenomenon of quantum entanglement, which has inspired approaches to quantum mechanics different from the Copenhagen interpretation, and has been at the forefront of major technological advances in quantum computing, quantum encryption, and quantum information theory. Notes Primary sources References External links NOVA: Inside Einstein's Mind (2015) — Retrace the thought experiments that inspired his theory on the nature of reality. Special relativity General relativity History of physics Thought experiments in quantum mechanics Albert Einstein
0.761506
0.991564
0.755081
The Archaeology of Knowledge
The Archaeology of Knowledge (L’archéologie du savoir, 1969) by Michel Foucault is a treatise about the methodology and historiography of the systems of thought (epistemes) and of knowledge (discursive formations) which follow rules that operate beneath the consciousness of the subject individuals, and which define a conceptual system of possibility that determines the boundaries of language and thought used in a given time and domain. The archaeology of knowledge is the analytical method that Foucault used in Madness and Civilization: A History of Insanity in the Age of Reason (1961), The Birth of the Clinic: An Archaeology of Medical Perception (1963), and The Order of Things: An Archaeology of the Human Sciences (1966). Summary The contemporary study of the History of Ideas concerns the transitions between historical world-views, but ultimately depends upon narrative continuities that break down under close inspection. The history of ideas marks points of discontinuity between broadly defined modes of knowledge, but those existing modes of knowledge are not discrete structures among the complex relations of historical discourse. Discourses emerge and transform according to a complex set of relationships (discursive and institutional) defined by discontinuities and unified themes. An énoncé (statement) is a discourse, a way of speaking; the methodology studies only the “things said” as emergences and transformations, without speculation about the collective meaning of the statements of the things said. A statement is the set of rules that makes an expression — a phrase, a proposition, an act of speech — into meaningful discourse, and is conceptually different from signification; thus, the expression “The gold mountain is in California” is discursively meaningless if it is unrelated to the geographic reality of California. Therefore, the function of existence is necessary for an énoncé (statement) to have a discursive meaning. As a set of rules, the statement has special meaning in the archaeology of knowledge, because it is the rules that render an expression discursively meaningful, while the syntax and the semantics are additional rules that make an expression significative. The structures of syntax and the structures of semantics are insufficient to determine the discursive meaning of an expression; they do not decide whether or not an expression complies with the rules of discursive meaning: a grammatically correct sentence might lack discursive meaning; inversely, a grammatically incorrect sentence might be discursively meaningful; even a group of letters combined in such a way that no recognizable lexical item is formulated can possess discursive meaning, e.g. QWERTY, which identifies a type of keyboard layout for typewriters and computers. The meaning of an expression depends upon the conditions in which the expression emerges and exists within the discourse of a field or the discourse of a discipline; the discursive meaning of an expression is determined by the statements that precede and follow it. To wit, the énoncés (statements) constitute a network of rules that establish which expressions are discursively meaningful; the rules are the preconditions for signifying propositions, utterances, and acts of speech to have discursive meaning. The analysis then deals with the organized dispersion of statements, discursive formations, and Foucault reiterates that the outlined archaeology of knowledge is one possible method of historical analysis. 
Reception The philosopher Gilles Deleuze describes The Archaeology of Knowledge as "the most decisive step yet taken in the theory-practice of multiplicities." See also Foucauldian discourse analysis References Further reading Deleuze, Gilles. 1986. Foucault. Trans. Sean Hand. London: Athlone, 1988. Foucault, Michel. 1969. The Archaeology of Knowledge. Trans. A. M. Sheridan Smith. London and New York: Routledge, 2002. 1969 non-fiction books Books about discourse analysis Éditions Gallimard books French non-fiction books Philosophy books Works by Michel Foucault
0.763653
0.988773
0.755079
Contextual design
Contextual design (CD) is a user-centered design process developed by Hugh Beyer and Karen Holtzblatt. It incorporates ethnographic methods for gathering data relevant to the product via field studies, rationalizing workflows, and designing human–computer interfaces. In practice, this means that researchers aggregate data from customers in the field where people are living and applying these findings into a final product. Contextual design can be seen as an alternative to engineering and feature driven models of creating new systems. Process overview The contextual design process consists of the following top-level steps: contextual inquiry, interpretation, data consolidation, visioning, storyboarding, user environment design, and prototyping. Collecting data – contextual inquiry Contextual inquiry is a field data collection technique used to capture detailed information about how users of a product interact with the product in their normal work environment. This information is captured by both observations of user behavior and conversations with the user while she or he works. A key aspect of the technique is to partner with the user, letting their work and the issues they encounter guide the interview. Key takeaways from the technique are to learn what users actually do, why they do it that way, latent needs, desires, and core values. Interpretation Data from each interview is analyzed and key issues and insights are captured. Detailed work models are also created in order to understand the different aspects of the work that matter for design. Contextual design consists of five work models which are used to model the work tasks and details of the working environment. These work models are: Flow model – represents the coordination, communication, interaction, roles, and responsibilities of the people in a certain work practice Sequence model – represents the steps users go through to accomplish a certain activity, including breakdowns Cultural model – represents the norms, influences, and pressures that are present in the work environment Artifact model – represents the documents or other physical things that are created while working or are used to support the work. Artifacts often have a structure or styling that could represent the user's way of structuring the work Physical model – represents the physical environment where the work tasks are accomplished; often, there are multiple physical models representing, e.g., office layout, network topology, or the layout of tools on a computer display. Data consolidation Data from individual customer interviews are analyzed in order to reveal patterns and the structure across distinct interviews. Models of the same type can be consolidated together (but not generalized—detail must be maintained). Another method of processing the observations is making an affinity diagram ("wall"), as described by Beyer & Holtzblatt: A single observation is written on each piece of paper. Individual notes are grouped according to the similarity of their contents. These groups are labeled with colored Post-it notes, each color representing a distinct level in the hierarchy. Then the groups are combined with other groups to get the final construct of observations in a hierarchy of up to three levels. 
Beyer & Holtzblatt propose the following color-coding convention for grouping the notes, from lowest to highest level in the hierarchy: White notes – individual notes captured during interpretation, also known as "affinity notes" Blue notes – summaries of groups of white notes that convey all the relevant details Pink notes – summaries of groups of blue notes that reveal key issues in the data Green notes – labels identifying an area of concern indicated by pink notes Beyer & Holtzblatt emphasize the importance of building the entire affinity diagram in one or two sessions rather than building smaller affinity diagrams over many sessions. This immersion in the data for an extended period of time helps teams see the broad scope of a problem quickly and encourages a paradigm shift of thought rather than assimilation of ideas. The design ideas and relevant issues that arise during the process should be included in the affinity diagram. Any holes in the data and areas that need more information should also be labeled. After completing the wall, participants "walk" the affinity diagram to stimulate new ideas and identify any remaining issues or holes in data. The affinity diagram is a bottom-up method. Consolidated data may also be used to create a cause-and-effect diagram or a set of personas describing typical users of the proposed system. Visioning In visioning, a cross-functional team comes together to create stories of how new product concepts, services, and technology can better support the user work practice. The visioning team starts by reviewing the data to identify key issues and opportunities. The data walking session is followed by a group visioning session during which the visioning team generates a variety of new product concepts by telling stories of different usage scenarios based on the data collected. A vision includes the system, its delivery, and support structures to make the new work practice successful, but is told from the user's point of view. Storyboarding After visioning, the team develops the vision in storyboards, capturing scenarios of how people will work with the new system. Understanding the current way of working, its structure and the complete workflow helps the design team address the problems and design the new workflow. Storyboards work out the details of the vision, guided by the consolidated data, using pictures and text in a series of hand-drawn cells. User Environment Design The User Environment Design captures the floor plan of the new system. It shows each part of the system, how it supports the user's work, exactly what function is available in that part, and how the user gets to and from other parts of the system. Contextual design uses the User Environment Design (UED) diagram, which displays the focus areas, i.e., areas which are visible to the user or which are relevant to the user. Focus areas can be defined further as functions in a system that support a certain type or part of the work. The UED also presents how the focus areas relate to each other and shows the links between focus areas. Prototyping Testing the design ideas with paper prototypes or even with more sophisticated interactive prototypes before the implementation phase helps the designers communicate with users about the new system and develop the design further. Prototypes test the structure of a User Environment Design and initial user interface ideas, as well as the understanding of the work, before the implementation phase. 
Depending on the results of the prototype test, more iterations or alternative designs may be needed. Uses and adaptations Contextual design has primarily been used for the design of computer information systems, including hardware and software. Parts of contextual design have been adapted for use as a usability evaluation method and for contextual application design. Contextual design has also been applied to the design of digital libraries and other learning technologies, and the design of a COVID-19 vaccine clinic mobile app. Contextual design has also been used as a means of teaching user-centered design/Human–computer interaction at the university level. A more lightweight approach to contextual design has been developed by its originators to address an oft-heard criticism that the method is too labor-intensive or lengthy for some needs. Yet others find the designer/user engagement promoted by contextual design to be too brief. References External links Karen Holtzblatt and Hugh Beyer: "Contextual Design", in: Soegaard, Mads and Dam, Rikke Friis (eds.). The Encyclopedia of Human–Computer Interaction, 2nd Ed. Aarhus, Denmark: The Interaction Design Foundation, 2014. InContext: "Contextual Design" (InContext was founded by Karen Holtzblatt and Hugh Beyer). "Contextual inquiry" on UsabilityNet. Hostrings: Contextual Design and Development is the Driving Force Behind all Successful Mobile Applications. Human–computer interaction Design
0.788617
0.957461
0.75507
Herrmann Brain Dominance Instrument
The Herrmann Brain Dominance Instrument (HBDI) is a system to measure and describe thinking preferences in people, developed by William "Ned" Herrmann while leading management education at General Electric's Crotonville facility. It is a type of cognitive style measurement and model, and is often compared to psychological pseudoscientific assessments such as the Myers-Briggs Type Indicator, Learning Orientation Questionnaire, DISC assessment, and others. Brain dominance model In his brain dominance model, Herrmann identifies four different modes of thinking: A. Analytical thinking Key words: logical, factual, critical, technical, quantitative. Preferred activities: collecting data, analysis, understanding how things work, judging ideas based on facts, criteria and logical reasoning. B. Sequential thinking Key words: safekeeping, structured, organized, complexity or detailed, planned. Preferred activities: following directions, detail-oriented work, step-by-step problem solving, organization, implementation. C. Interpersonal thinking Key words: kinesthetic, emotional, spiritual, sensory, feeling. Preferred activities: listening to and expressing ideas, looking for personal meaning, sensory input, group interaction. D. Imaginative thinking Key words: visual, holistic, intuitive, innovative, conceptual. Preferred activities: looking at the big picture, taking initiative, challenging assumptions, visuals, metaphoric thinking, creative problem solving, long-term thinking. His theory was based on theories of the modularity of cognitive functions, including well-documented specializations in the brain's cerebral cortex and limbic systems, and the research into left-right brain lateralization by Roger Wolcott Sperry, Robert Ornstein, Henry Mintzberg, and Michael Gazzaniga. These theories were further developed to reflect a metaphor for how individuals think and learn. Use of that metaphor brought later criticism by brain researchers such as Terence Hines for being overly simplistic, though advocates argue that the metaphorical construct has been beneficial in organizational contexts including business and government. Herrmann also coined the concept Whole Brain Thinking as a description of flexibility in using thinking styles that one may cultivate in individuals or in organizations allowing the situational use of all four styles of thinking. The Herrmann Brain Dominance Instrument The format of the instrument is a 116-question online assessment, which determines the degree of preference for each of the model's four styles of thinking. More than one style may be dominant (or a primary preference) at once in this model. For example, in Herrmann's presentation a person may have strong preferences in both analytical and sequential styles of thinking but lesser preferences in interpersonal or imaginative modes, though he asserts all people use all styles to varying degrees. A 1985 dissertation by C. Bunderson, currently CEO of the non-profit EduMetrics Institute asserts that "four stable, discrete clusters of preference exist", "scores derived from the instrument are valid indicators of the four clusters", and "The scores permit valid inferences about a person's preferences and avoidances for each of these clusters of mental activity". 
Consulting and training Based on the HBDI Assessment and Whole Brain model, Herrmann International and its global affiliates offer consulting and solutions (including workshops, programs, books and games) to improve personal or group communication, creativity, and other benefits. Critiques Self reporting Measurements that require people to state preferences between terms have received criticism. Researchers C. W. Allinson and J. Hayes, in their own 1996 publication of a competing cognitive style indicator called Cognitive Style Index in the peer reviewed Journal of Management Studies, noted that "there appears to be little or no published independent evaluation of several self-report measures developed as management training tools. [including] Herrmann Brain Dominance Instrument." However, some find usefulness in self reporting measurements. Researchers G.P. Hodgkinson and E. Sadler-Smith in 2003 found cognitive style indicators generally useful for studying organizations. However, in a critique of the Cognitive Style Index indicator they opined that progress in the field had been "hampered by a proliferation of alternative constructs and assessment instruments" many unreliable with a lack of agreement over nomenclature. To measure self-report consistency, a differential item functioning review of HBDI was published in 2007 by Jared Lees. However, his tests were supported by EduMetrics, a company on contract with Herrmann International to evaluate the system, and were therefore not completely independent. Lateralization Herrmann International describes an underlying basis for HBDI in the lateralization of brain function theory championed by Gazzaniga and others that associates each of the four thinking styles with a particular locus in the human brain. Analytical and sequential styles are associated with left brain and interpersonal and imaginative styles are associated with right brain, for example. Ned Herrmann described dominance of a particular thinking style with dominance with a portion of a brain hemisphere. The notion of hemisphere dominance attracted some criticism from the neuroscience community, notably by Terence Hines who called it "pop psychology" based on unpublished EEG data. He asserts that current literature instead found that both hemispheres are always involved in cognitive tasks and attempting to strengthen a specific hemisphere does not improve creativity, for example. Hines stated "No evidence is presented to show that these 'brain dominance measures' measure anything related to the differences between the two hemispheres. In other words, no evidence of validity [of hemisphere dominance] is presented.". Creativity Herrmann offered creativity workshops based on leveraging all the quadrants within the Whole Brain Model, rather than focusing on physiological attributes. strengthening particular thinking styles and strengthening the right hemisphere, which received critiques that creativity is not localized to a particular thinking style nor to a particular hemisphere. A study published in the peer reviewed Creativity Research Journal in 2005 by J. Meneely and M. Portillo agreed that creativity is not localized into a particular thinking style, such as a right-brain dominance resulting in more creativity. They did however find correlation between creativity in design students based on how flexible they were using all four thinking styles equally as measured by the HBDI. 
When students were less entrenched in a specific style of thinking they measured higher creativity using Domino's Creativity Scale (ACL-Cr). References Allinson, C.W., & Hayes, J. (1996) 'Cognitive Style Index: A measure of intuition-analysis for organizational research', Journal of Management Studies, 33:1 January 1996 Bentley, Joanne and Hall, Pamela (2001) Learning Orientation Questionnaire correlation with the Herrmann Brain Dominance Instrument: A validity study Dissertation Abstracts International Section A: Humanities and Social Sciences, Vol 61(10-A), Apr 2001. pp. 3961. Deardorff, Dale S. (2005) An exploratory case study of leadership influences on innovative culture: A descriptive study Dissertation Abstracts International: Section B: The Sciences and Engineering, Vol 66(4-B), 2005. pp. 2338. DeWald, R. E. (1989). Relationships of MBTI types and HBDI preferences in a population of student program managers (Doctoral dissertation, Western Michigan University, 1989). Dissertation Abstracts International, 50(06), 2657B. (University Microfilms No. AAC89-21867) Herrmann, Ned (1999) The Theory Behind the HBDI and Whole Brain Technology pdf Hines, Terence (1991) 'The myth of right hemisphere creativity.' Journal of Creative Behavior, Vol 25(3), 1991. pp. 223–227. Hines, Terence (1987) 'Left Brain/Right Brain Mythology and Implications for Management and Training', The Academy of Management Review, Vol. 12, No. 4, October 1987 Hines, Terence (1985) 'Left brain, right brain: Who's on first?' Training & Development Journal, Vol 39(11), Nov 1985. pp. 32–34. [Journal Article] Hodgkinson, Gerard P., and Sadler-Smith, Eugene (2003) Complex or unitary? A critique and empirical re-assessment of the Allinson-Hayes Cognitive Style Index., Journal of Occupational and Organizational Psychology, 09631798, 20030601, Vol. 76, Issue 2 Holland, Paul W. and Wainer, Howard (1993) Differential Item Functioning Krause, M. G. (1987, June). A comparison of the MBTI and the Herrmann Participant Survey. Handout from presentation at APT-VII, the Seventh Biennial International Conference of the Association for Psychological Type, Gainesville, FL. Lees, Jared A. (2007) Differential Item Functioning Analysis of the Herrmann Brain Dominance Instrument Masters Thesis, Brigham Young University - on ScholarsArchive McKean, K. (1985) 'Of two minds: Selling the right brain.', Discover, 6(4), pp. 30–41. Meneely, Jason; and Portillo, Margaret; (2005) The Adaptable Mind in Design: Relating Personality, Cognitive Style, and Creative Performance. Creativity Research Journal, Vol 17(2-3), 2005. pp. 155–166. [Journal Article] Wilson, Dennis H. (2007) A comparison of the Herrmann Brain Dominance Instrument(TM) and the extended DISCMRTM behavior profiling tool: An attempt to create a more discerning management perspective. Dissertation Abstracts International Section A: Humanities and Social Sciences, Vol 68(3-A). pp. 1079. Further reading Ned Herrmann (1990) The Creative Brain, Brain Books, Lake Lure, North Carolina. . . Ned Herrmann (1996) The Whole Brain Business Book, McGraw-Hill, New York, NY. . . Giannini, John L. (1984) Compass of the Soul: Archetypal Guides to a Fuller Life. . . Edward Lumsdaine, M. Lumsdaine (1994) Creative Problem Solving, McGraw-Hill . . Peter Ferdinand Drucker, David Garvin, Dorothy Leonard, Susan Straus, and John Seely Brown. (1998). Harvard Business Review on Knowledge Management. Harvard Business School Press . . Sala, Sergio Della, Editor (1999). 
Mind Myths: Exploring Popular Assumptions About the Mind and Brain, J. Wiley & Sons, New York. Human resource management
0.770145
0.980422
0.755067
GNS theory
GNS theory is an informal field of study developed by Ron Edwards which attempts to create a unified theory of how role-playing games work. Focused on player behavior, GNS theory holds that participants in role-playing games organize their interactions around three categories of engagement: Gamism, Narrativism and Simulation. The theory focuses on player interaction rather than statistics, encompassing game design beyond role-playing games. Analysis centers on how player behavior fits the above parameters of engagement and how these preferences shape the content and direction of a game. GNS theory is used by game designers to dissect the elements which attract players to certain types of games. History GNS theory was inspired by the threefold model idea, from discussions on the rec.games.frp.advocacy group on Usenet in summer 1997. The Threefold Model defined drama, simulation and game as three paradigms of role-playing. The name "Threefold Model" was coined in a 1997 post by Mary Kuhner outlining the theory. Kuhner posited the main ideas of the theory on Usenet, and John H. Kim later organized the discussion and helped it grow. In his article "System Does Matter", which was first posted to the website Gaming Outpost in July 1999, Ron Edwards wrote that all RPG players have one of three mutually exclusive perspectives. According to Edwards, enjoyable RPGs focus on one perspective, and a common error in RPG design is to try to include all three types. His article could be seen as a warning against generic role-playing game systems from large developers. Edwards connected GNS theory to game design, which helped to popularize the theory. On December 2, 2005, Edwards closed the forums on the Forge about GNS theory, saying that they had outlived their usefulness. Aspects Gamism A gamist makes decisions to satisfy predefined goals in the face of adversity: to win. Edwards wrote that these decisions are most common in games pitting characters against successively tougher challenges and opponents, often without regard for why the characters are facing them in the first place. Gamist RPG design emphasizes parity; all player characters should be equally strong and capable of dealing with adversity. Combat and diversified options for short-term problem solving (for example, lists of specific spells or combat techniques) are frequently emphasized. Randomization provides a gamble, allowing players to risk more for higher stakes, rather than modelling probability. Examples include Magic: The Gathering, chess and most computer games. Narrativism Narrativism relies on outlining (or developing) character motives, placing characters into situations where those motives conflict and making their decisions the driving force. For example, a samurai sworn to honor and obey his lord might be tested when directed to fight his rebellious son; a compassionate doctor might have his charity tested by an enemy soldier under his care; or a student might have to decide whether to help her best friend cheat on an exam. This has two major effects. Characters usually change and develop over time, and attempts to impose a fixed storyline are impossible or counterproductive. Moments of drama (the characters' inner conflict) make player responses difficult to predict, and the consequences of such choices cannot be minimized. Revisiting character motives or underlying emotional themes often leads to escalation: asking variations of the same "question" at higher intensity levels. 
Simulationism Simulationism is a playing style that recreates, or is inspired by, a genre or source. Its major concerns are internal consistency, analysis of cause and effect and informed speculation. Characterized by physical interaction and details of setting, simulationism shares with narrativism a concern for character backgrounds, personality traits and motives, used to model cause and effect in the intellectual and physical realms. Simulationist players consider their characters independent entities, and behave accordingly; they may be reluctant to have their character act on the basis of out-of-character information, similar to the distinction between actor and character in a film or play. Character generation and the modeling of skill growth and proficiency can be complex and detailed. Many simulationist RPGs encourage illusionism (manipulation of in-game probability and environmental data to point to predefined conclusions) to create a story. Call of Cthulhu recreates the horror and cosmic insignificance of humanity in the Cthulhu Mythos, using illusionism to craft grisly fates for the players' characters and maintain consistency with the source material. Simulationism maintains a self-contained universe operating independent of player will; events unfold according to internal rules. Combat may be broken down into discrete, semi-randomised steps for modeling attack skill, weapon weight, defense checks, armor, body parts and damage potential. Some simulationist RPGs explore different aspects of their source material, and may have no concern for realism; Toon, for example, emulates cartoon hijinks. Role-playing game systems such as GURPS and Fudge use a somewhat realistic core system which can be modified with sourcebooks or special rules. Terminology GNS theory incorporates Jonathan Tweet's three forms of task resolution, which determine the outcome of an event. According to Edwards, an RPG should use a task-resolution system (or combination of systems) most appropriate for that game's GNS perspective. The task-resolution forms are: Drama/destiny: Participants decide the results, with plot requirements the determining factor (for example, Houses of the Blooded). Fortune/chance: Chance decides the results (for example, dice). Karma/fate: A fixed value decides the results (for example, statistics comparison in Nobilis; Jenna K. Moran's work frequently takes inspiration from software development methodologies). Edwards has said that he changed the name of the Threefold Model's "drama" type to "narrativism" in GNS theory to avoid confusion with the "drama" task-resolution system. GNS theory identifies five elements of role-playing: Character: A fictional person Color: Details providing atmosphere Setting: Location in space and time Situation: The dilemma System: Determines how in-game events unfold It details four stances the player may take in making decisions for their character: Actor: Decides based on what their character wants and knows Author: Decides based on what they want for their character, retrospectively explaining why their character made a decision Director: Makes decisions affecting the environment instead of a character (usually represented by a gamemaster in an RPG) Pawn: Decides based on what they want for their character, without explaining why their character made a decision Criticism Brian Gleichman, a self-identified Gamist whose works Edwards cited in his examination of Gamism, wrote an extensive critique of the GNS theory and the Big Model. 
He states that although any RPG intuitively contains elements of gaming, storytelling, and self-consistent simulated worlds, the GNS theory "mistakes components of an activity for the goals of the activity", emphasizes player typing over other concerns, and assumes "without reason" that there are only three possible goals in all of role-playing. Combined with the principles outlined in "System Does Matter", this produces a new definition of RPG, in which its traditional components (challenge, story, consistency) are mutually exclusive, and any game system that mixes them is labeled as "incoherent" and thus inferior to the "coherent" ones. To disprove this, Gleichman cites a survey conducted by Wizards of the Coast in 1999, which identified four player types and eight "core values" (instead of the three predicted by the GNS theory) and found that these are neither exclusive, nor strongly correlated with particular game systems. Gleichman concludes that the GNS theory is "logically flawed", "fails completely in its effort to define or model RPGs as most people think of them", and "will produce something that is basically another type of game completely". Gleichman also states that just as the Threefold Model (developed by self-identified Simulationists who "didn't really understand any other style of player besides their own") "uplifted" Simulation, Edwards' GNS theory "trumpets" its definition of Narrativism. According to him, Edwards' view of Simulationism as "'a form of retreat, denial, and defense against the responsibilities of either Gamism or Narrativism'" and characterization of Gamism as "being more akin to board games" than to RPGs, reveals an elitist attitude surrounding the narrow GNS definition of narrative role-playing, which attributes enjoyment of any incompatible play-style to "'[literal] brain damage'". Lastly, Gleichman states that most games rooted in the GNS theory, e.g. My Life with Master and Dogs in the Vineyard, "actually failed to support Narrativism as a whole, instead focusing on a single Narrativist theme", and have had no commercial success. Fantasy author and Legend of the Five Rings contributor Marie Brennan reviews the GNS theory in the eponymous chapter of her 2017 non-fiction book Dice Tales. While she finds many of its "elaborations and add-ons that accreted over the years... less than useful", she suggests that the "core concepts of GNS can be helpful in elucidating some aspects of [RPGs], ranging from game design to the disputes that arise between players". A self-identified Narrativist, Brennan finds Edwards' definition of said creative agenda ("exploration of theme") too narrow, adding "character development, suspense, exciting plot twists, and everything else that makes up a good story" to the Narrativist priorities list. She concludes that rather than being a practical guide, GNS is more useful for explaining the general ideas of role-playing and especially "for understanding how gamers behave". The role-playing game historian Shannon Appelcline (author of Designers & Dragons) drew parallels between three of his contemporary commercial categories of RPG products and the three basic categories of GNS. He posited that "OSR games are largely gamist and indie games are largely narrativist", while "the mainstream games... tend toward simulationist on average", and cautiously concluded that this "makes you think that Edwards was on to something". 
A noted participant in the Forge, contributor to GNS theory, and developer of many role-playing games, Vincent Baker has said that "the model is obsolete" and that trying to fit play into the boxes provided by the model may contribute to misunderstanding it. See also Bartle taxonomy of player types Gamification References External links "GNS and Other Matters of Role-playing Theory" by Ron Edwards "A Look at Gamist-Narrativist-Simulationist Theory" by Nathan Jennings Role-playing game terminology Game studies
0.772171
0.977796
0.755026
Polycrisis
The term polycrisis, also referred to as a metacrisis or permacrisis, describes a complex situation where multiple, interconnected crises converge and amplify each other, resulting in a predicament which is difficult to manage or resolve. Unlike single crises which may have clear causes and solutions, a polycrisis involves overlapping and interdependent issues, making it a more pervasive and enduring state of instability. This concept reflects growing concerns about the sustainability and viability of contemporary socio-economic, political, and ecological systems. The concept was coined in the 1990s but became popular in the 2020s to refer to the effects of the COVID-19 pandemic, war, surging debt levels, inflation, climate change, resource depletion, growing inequality, artificial intelligence and synthetic biology, and democratic backsliding. Critics of the term have characterized it as a buzzword or a distraction from more concrete causes of the crises. Background The idea of a polycrisis has its roots in the recognition that modern societies face not just isolated problems but a series of interconnected challenges that could lead to cascading failures if not addressed as such. The term emphasizes the multifaceted nature of these crises, which can include economic inequality, political instability, environmental degradation, and social unrest, all reinforcing one another. The interconnectedness of these crises means that solutions in one area can often lead to unintended consequences in another, creating a feedback loop that exacerbates the overall situation. The concept of polycrisis captures the complexity and interconnectedness of the challenges facing humanity in the 21st century. It underscores the need for new ways of thinking and acting that go beyond traditional problem-solving methods. As humanity grapples with multiple, overlapping crises, the recognition of polycrisis offers both a warning and an opportunity to forge a more sustainable and resilient future. Components Ecological overshoot & limits to growth The concept of polycrisis aligns with the warnings issued in the Limits to Growth report, which suggested that unchecked economic growth and resource consumption would eventually surpass the Earth's carrying capacity. Human ecological overshoot—using resources faster than they can be replenished—has led to environmental degradation, climate change, and biodiversity loss, which in turn threaten the stability and continuity of human societies. Socio-political instability During the late 20th and early 21st centuries, it has become increasingly evident that liberal democracies exhibit stark internal contradictions, such as that of egalitarian ideals versus imperialistic practices, which undermine their legitimacy as leaders of the "rules-based" liberal international order. The rise of right-wing populism and the erosion of the Western social contract reflect a growing popular dissatisfaction with the political and economic systems in the West. These political shifts are often fueled by economic inequalities, perceived threats to national identity and social status, and disillusionment with traditional political elites. Technological & economic disparities The concentration of wealth and power among a small elite, as highlighted in works like Douglas Rushkoff's Survival of the Richest, contributes to the polycrisis by exacerbating social inequalities and undermining potential collective action to address the issues. 
The increasing gap between the wealthy and the rest of society raises questions about the sustainability of current economic models and the fairness of technological advancements that primarily benefit the elite. Philosophical & existential dimensions The polycrisis also involves a deeper, philosophical reckoning with humanity's place in the world. As articulated in Vanessa Machado de Oliveira’s Hospicing Modernity, there is a small but growing awareness of the limits of human control and the need to accept ecological and biological realities. This fundamentally challenges the anthropocentric and individualistic narratives that have historically underpinned Western thought. Responses & criticism Critics of the polycrisis narrative argue that it can lead to fatalism and inaction, suggesting instead a focus on practical, incremental changes that can build resilience and adaptability. Various thought leaders and figureheads in the technology space have aligned themselves with effective accelerationism and have forcefully critiqued concepts related to the polycrisis, arguing that the way to solve most, if not all, of the problems facing humanity is through further economic growth and the acceleration of tech development and deployment. In 2023, venture capitalist and tech magnate Marc Andreessen published the Techno-Optimist Manifesto, arguing that technology is what creates wealth and happiness. Various scholars and thought leaders have proposed different frameworks for understanding and responding to the polycrisis. Some advocate for a radical rethinking of modernity and a transition towards more sustainable and equitable ways of living. This includes adopting ecological wisdom from Indigenous cultures, reimagining economic systems, and embracing a deeper connection with the natural world. See also References Crisis
0.769555
0.981075
0.754991
Open educational practices
Open educational practices (OEP) are part of the broader open education landscape, including the openness movement in general. It is a term with multiple layers and dimensions and is often used interchangeably with open pedagogy or open practices. OEP represent teaching and learning techniques that draw upon open and participatory technologies and high-quality open educational resources (OER) in order to facilitate collaborative and flexible learning. Because OEP emerged from the study of OER, there is a strong connection between the two concepts. OEP, for example, often, but not always, involve the application of OER to the teaching and learning process. Open educational practices aim to take the focus beyond building further access to OER and consider how in practice, such resources support education and promote quality and innovation in teaching and learning. The focus in OEP is on reproduction/understanding, connecting information, application, competence, and responsibility rather than the availability of good resources. OEP is a broad concept which can be characterised by a range of collaborative pedagogical practices that include the use, reuse, and creation of OER and that often employ social and participatory technologies for interaction, peer-learning, knowledge creation and sharing, empowerment of learners, and open sharing of teaching practices. OEP may involve students participating in online, peer production communities within activities intended to support learning or more broadly, any context where access to educational opportunity through freely available online content and services is the norm. Such activities may include (but are not limited to), the creation, use and repurposing of open educational resources and their adaptation to the contextual setting. OEP can also include the open sharing of teaching practices and aim "to raise the quality of education and training and innovate educational practices on an institutional, professional and individual level." The OEP community includes educational professionals (i.e. teachers, educational developers, researchers), policy makers, managers/administrators of organisations, and learners. OER are often created as part of an OEP strategy, and viewed as a contribution to the transformation of 21st century learning and learners. Scope of open educational practices Open educational practices fall under the broader movement of openness in education, which is an evolving concept shaped by the shifting needs and available resources of societies, cultures, geographies, and economies. Developing a precise definition, thus, is a challenge. OEP are sometimes used interchangeably with the term open educational pedagogies. While OEP are inclusive of open pedagogies represented by teaching techniques, OEP can also incorporate open scholarship, open course design, open educational advocacy, social justice, open data, ethics, and copyright." Creating a database or repository of open educational resources is not open educational practice (Ehlers 2011) but can be part of an open teaching strategy. OEP can be grounded in the concept of open pedagogies as described by Hegarty which include: Participatory Technologies People, Openness, Trust Innovation and Creativity Sharing Ideas and Resources Connected Community Learner-Generated Reflective Practice Peer Review Nascimbeni & Burgos (2016) offer a definition that identifies activities such as course design, content creation, pedagogy, and assessment design as areas for infusing OEP. 
Paskevicius provides an alternative definition:Teaching and learning practices where openness is enacted within all aspects of instructional practice; including the design of learning outcomes, the selection of teaching resources, and the planning of activities and assessment. OEP engage both faculty and students with the use and creation of OER, draw attention to the potential afforded by open licences, facilitate open peer-review, and support participatory student-directed projects. Definitions While a canonical definition of open educational practice does not exist, various groups and scholars have provided their interpretation or viewpoint. A definition used by others either in its entirety or as basis for further development is provided by the Ehlers, who defines OEP "as practices which support the (re)use and production of OER through institutional policies, promote innovative pedagogical models, and respect and empower learners as co-producers on their lifelong learning path". Here is a list of some other OEP definitions. The Open Educational Quality (OPAL) Initiative define open educational practices as "the use of Open Educational Resources to raise the quality of education and training and innovate educational practices on institutional, professional and individual level". The International Council for Open and Distance Education (ICDE): "Open Educational Practices are defined as practices which support the production, use and reuse of high quality open educational resources (OER) through institutional policies, which promote innovative pedagogical models, and respect and empower learners as co-producers on their lifelong learning path". The UK OER support and evaluation team suggest that (compared to ICDE) "a broader definition would encompass all activities that open up access to educational opportunity, in a context where freely available online content and services (whether 'open', 'educational' or not) are taken as the norm". The Institute for the Study of Knowledge Management in Education (ISKME) defines Open Educational Practices (OEP) as comprising a set of skills in collaboration, curation, curricular design, and leadership around the use of Open Educational Resources. OEP build educator capacity for using OER to improve curriculum, instruction, and pedagogy, and to gain skills in digital resource curation and curriculum creation, and to actively collaborate around and advocate for innovative approaches to open education and OER. ISKME developed the Open Educational Practice Rubric to articulate key learning objectives for integrating OER and open educational practice into teaching and learning improvement and leadership. The Center for Open Learning and Teaching (University of Mississippi) state that "Open Educational Practices (OEP) are teaching techniques that introduce students to online peer production communities. Such communities (for instance, Wikipedia, YouTube, Open Street Map) host dynamic communities and offer rich learning environments". The European Foundation for Quality in e-Learning (EFQUEL) write that Open Educational Practices are "the next phase in OER development which will see a shift from a focus on resources to a focus on open educational practices being a combination of open resources use and open learning architectures to transform learning into 21st century learning environments in which universities’, adult learners and citizens are provided with opportunities to shape their lifelong learning pathways in an autonomous and self-guided way". 
The Cape Town Open Education Declaration (with over 2,500 signatories) reads: "open education is not limited to just open educational resources. It also draws upon open technologies that facilitate collaborative, flexible learning and the open sharing of teaching practices that empower educators to benefit from the best ideas of their colleagues. It may also grow to include new approaches to assessment, accreditation and collaborative learning". OEP areas Best practice case studies identify a number of OEP areas. These areas surround the following topics, with other studies identifying categories and elements of open educational practices. Topics Using OER Innovation Learning Improving Quality Something Else Categories open educational resources open/public pedagogies open learning open scholarship open sharing (of teaching practice) open technologies Elements Infrastructure (tools) OER Use Open Design Adoption Policy Impact Some scholars claim that the breadth of definitions through which OEP are described impairs researchers' ability to measure the impact of OEP. Others, however, have undertaken projects exploring and documenting OEP which demonstrate potential areas of impact. For instance, adopting OEP can lead to opportunities for collaborative learning through the affordances of Web 2.0 tools. OEP can support innovative pedagogy as an extension of teaching and learning practices. In this context, open also refers to the learning environment where learner's set their own objectives rather than being restricted by those set externally (a closed environment). Additionally, OEP has shown potential for use in addressing social justice issues through provision of increased access, modification for inclusion of diverse voices, and democratization of scholarly conversations. OEP and collaborative learning The presence of a shared knowledge creation experience is one characteristic included in most definitions of OEP. The networked participation which takes place as learners work together in a community to create knowledge can result in increased student engagement. Artifacts created contribute to the community beyond the walls of the classroom, something described in Knowledge Building theory as adding value to student work even beyond its use as an evaluation of student understanding. OEP and innovative pedagogy Much of the impact of OEP is a result of the "transformational role" of the collaboration taking place between instructors and students. Open educational practices can also provide the experience and tools to help bridge the gap between formal and informal learning, and potentially an open source curriculum or emergent curriculum. Use of these tools and experience facilitate innovative pedagogical practices resulting in benefits beyond simply mastering course content. For instance, Nusbaum describes a project in which students were invited to modify the openly licensed textbook being used in their psychology course. These student modifications diversified the content and helped create a resource more reflective of the context in which the students were taking the class. OEP and social justice Research continues to document the impact of OEP in addressing social justice issues. Cronin and McLaren found the incorporation of OEP can lead to increased awareness, use, and creation of open educational resources, alleviating high textbook costs which create barriers to education for some students. 
Student voices incorporated through OEP can diversify the content of the course, teaching, learning, and research materials. Nusbaum (2020) found that diversifying content through OEP contributes to an improved sense of belonging for subsequent students using the resource. Embracing OEP offers advantages regarding social justice, but it is important to think about social justice critically. For doing this, the work of Lambert (2018), Hodgkinson-Williams and Trotter, and Bali et al. is enlightening. Lambert, for example, argues that technology-driven initiatives can help only when there is awareness and willingness from societies, governments and individuals to make changes to address social injustices. Open education in online learning does not provide affordability to disadvantaged learners by default; this needs to be embedded with care and awareness. She introduces her own definition of OEP framed under a social justice perspective: Open Education is the development of free digitally enabled learning materials and experiences primarily by and for the benefit and empowerment of non-privileged learners who may be under-represented in education systems or marginalised in their global context. Success of social justice aligned programs can be measured not by any particular technical feature or format, but instead by the extent to which they enact redistributive justice, recognitive justice and/or representational justice. This definition includes the idea of social injustice, which is taken from Fraser's work on abnormal justice. Fraser created a tripartite model of justice based on three pillars: redistribution, recognition and representation. These pillars are taken by Lambert and reinterpreted in the context of open education: Redistributive justice – This dimension is concerned with economic inequalities. It involves the allocation of free educational resources, or of human resources, to learners who otherwise cannot afford them. If possible, additional free support for learners should be available. Recognitive justice – This dimension is concerned with cultural inequities and involves respect and recognition for cultural and gender differences. There is a duty to recognise cultural, ethnic and religious diversity when designing the curriculum; everybody should feel recognised in the curriculum. The implication of this principle is to design educational material with learners and their needs in mind. Representational justice – This dimension is concerned with political exclusion in education. It is based on the principle of self-determination, whereby disadvantaged and marginalised groups should present their own stories themselves, rather than have them told by others. This involves equitable representation and political voice, and implies designing with representatives of the community where possible. Level of openness Levels of openness in educational practice can be seen as a continuum and as a continual decision-making process, and they are not universally experienced. The trajectory toward OEP spans the application of open pedagogical models, the use of OER, and the creation of OER, in a range from low to high: Low - teachers believe they know what learners have to learn; the focus is on knowledge transfer. Medium - objectives are predetermined (a closed environment), but open pedagogical models are used to encourage dialogue and problem-based learning. High - learning objectives and pathways are highly governed by learners. 
Those engaged in OEP are negotiating between competing issues when making pedagogical decisions at the macro (global), meso (community/network), micro (individual), and nano (interaction) levels. At each of these levels, individuals ask themselves questions such as: Will I share openly? Whom will I share with? Who will I share as? Will I share this? Initiatives The OPAL Consortium The Open Educational Quality (OPAL) Initiative defines open educational practices as "the use of Open Educational Resources (OER) to raise the quality of education and training and innovate educational practices on institutional, professional and individual level". For the mainstreaming of open educational practices, OPAL recommends: Enabling Legislation to Facilitate OEP Incentivising OEP through Legislation Reducing Legislative Burdens through Harmonisation Rethinking Intellectual Property Law for the 21st Century Empowering Learners to take up OEP Addressing Fragmentation in Learning Resources Promoting the provision of Open Educational Assessment Strengthening the Evidence-Base of OEP Helping institutions nurture OEP Addressing Sustainability Concerns Making the Societal Benefit Explicit Culturing Innovation through Networks Supporting Truly Open Collaboration Building a Coalition of Stakeholders around Principles of Openness Improving Trust in OEP Integrate OEP into Institutional Quality Procedures Create Open Academic/Scientific Trust Infrastructures The International Council for Open and Distance Education sees OEP as those practices which support the production, use and reuse of high-quality open educational resources, and regards OEP as often being achieved through institutional policies which promote innovative pedagogical models and respect and empower learners as co-producers on their lifelong learning path. The scope of OEP covers all areas of OER governance: policy makers, managers and administrators of organizations, educational professionals and learners. The OLCOS Consortium The Open e-Learning Content Observatory Services (OLCOS) project is a Transversal Action under the European eLearning Programme. The OLCOS Roadmap focuses on Open Educational Practices, providing orientation and recommendations to educational decision makers on how to develop the use of OER. To further benefit from OER, one needs to better understand how their role could promote innovation and change in educational practices. The Roadmap states that delivering OER into the dominant model of teacher-centred knowledge transfer will have little effect in equipping teachers, students and workers with the knowledge and skills required in the knowledge economy and lifelong learning. Downloading Web-accessible open teaching materials for classes, and continuing a one-way channel of content provision, will likely mirror the limited impact on educational practices achieved by educational institutions' massive investments in e-learning infrastructure. Open Educational Practices aim to deliver a competency-focused, constructivist paradigm of learning and promote a creative and collaborative engagement with digital content, tools and services to develop the knowledge and skills required today. SCORE The Support Centre for Open Resources in Education (SCORE) at the Open University (UK) was the second major initiative to be funded by the Higher Education Funding Council for England (Hefce). 
(The first was the UKOER programme, jointly run by the Joint Information Systems Committee (JISC) and the Higher Education Academy (HEA).) Discussions and actions were moving on from how open educational resources are published to how they are used, placing OER as an enabler within a wider set of open educational practices. Over a period of three years, SCORE initiated a series of activities and events that involved several hundred educational practitioners from the majority of the higher education institutions in England. There has been interest in how educational practitioners would accept and embed open resources into their practices. Sharing is at the heart of the philosophy of OER, and probably of OEP, and thus collective and cooperative activities between people and institutions are likely to be a key factor in the sustainability of such practices. SCORE reports it succeeded in raising the profile of OER and OEP within UK higher education institutions by assisting existing communities of practice and by creating new communities of practice to form a much larger network of practice that will be sustained by its participants. Challenges There are many challenges to the adoption of open educational practices. Certain aspects like technology have received greater attention than others, but all of the factors below inhibit widespread use of open educational practices: Technology - Lack of or insufficient investment in broadband access as well as up-to-date software and hardware Business Model - OER and OEP can incur a significant provider cost. Typically, financial models focus on technology, but they also need to account for staff; i.e., those who create, reuse, mix, and modify the content. Law & Policy - There is either ignorance of open access licenses, such as the Creative Commons licenses and the GNU General Public License, and/or restrictive intellectual property rights that limit the development of OEP. Pedagogy - Traditional models of learning are teacher-centric, where teachers dispense knowledge to students, and teachers/professors may not know how to integrate OEP into courses. Quality Assessment - There is not a quick and universal way to assess the quality of OER. MERLOT, based on the academic peer review process, has only reviewed 14% of submitted material. Cultural Imperialism - There is the concern that Western institutions use OEP to design educational courses for developing countries. Addressing matters of social justice - This dimension of OEP entails deliberate work; it needs to be built into the practice, with OERs informed by theories of social justice. Therefore, a dedicated section on social justice in OEP is needed. Strategies and recommendations In order for there to be widespread adoption of OEP, legal and educational policy must change and OEP needs to become sustainable. Funding - Develop a sustainable funding model for OEP that addresses technology and staffing. Various funding models are being explored; examples include: Endowment model, e.g. the Stanford Encyclopedia of Philosophy Project. Membership model, e.g. the Sakai Educational Partners Program, where member organizations pay a fee. Donations model, e.g. Wikipedia and the Apache Foundation, although Apache has modified the model so that there are fees for some services. Conversion model, e.g. Redhat, Ubuntu, SuSe, which convert free subscribers to paying customers for advanced features and support. Contributor pay model, e.g. the Public Library of Science (PLoS), where contributors pay for the cost of maintaining the contribution. 
Sponsorship model, e.g. MIT iCampus Outreach Initiative, which is sponsored by Microsoft & China Open Resources for Education, and Stanford on iTunes, which is sponsored by Stanford & Apple. They are free for users with commercial messages by sponsors. Institutional model, e.g. MIT OpenCourseWare Project. Government model including UN programmes, e.g. Canada's SchoolNet Project. Partnership and exchange, e.g. Universities working together to create OER systems. Law & policy - In terms of law, there should be an open access mandate for partially or fully publicly funded research. Also teachers and researchers should be better informed about their intellectual property rights. Researchers and teachers who use public funding should sign non-exclusive copyrights so their institutions make their work available under appropriate licenses. Open advocates should demand public-private partnerships. Build stakeholders Quality assessment Pedagogy - Help teachers change to facilitate use of OEP to emphasize learners' developing competences, knowledge, and skills. Therefore, teaching is no longer educator-centric, but instead it focuses on what learners can do for themselves. See also Andragogy Digital literacy Edutopia Emergent curriculum Global SchoolNet OER Commons Open content Open educational resources policy Open.Michigan Open-source curriculum OpenLearn Connexions References External links Special issue of the ALSIC journal (2016) on how open practice can support the teaching and learning of languages. Special issue of the Distance Education journal on OEPs. In search for the Open Educator: Proposal of a definition and a framework to increase Openness adoption among university educators. Educational practices Open content Social justice
0.79116
0.954282
0.75499
Dialogic
Dialogic refers to the use of conversation or shared dialogue to explore the meaning of something. (This is as opposed to monologic which refers to one entity with all the information simply giving it to others without exploration and clarification of meaning through discussion.) The word "dialogic" relates to or is characterized by dialogue and its use. A dialogic is communication presented in the form of dialogue. Dialogic processes refer to implied meaning in words uttered by a speaker and interpreted by a listener. Dialogic works carry on a continual dialogue that includes interaction with previous information presented. The term is used to describe concepts in literary theory and analysis as well as in philosophy. Along with dialogism, the term can refer to concepts used in the work of Russian philosopher Mikhail Bakhtin, especially the texts Problems of Dostoevsky's Poetics and The Dialogic Imagination: Four Essays by M.M. Bakhtin. Overview Bakhtin contrasts the dialogic and the "monologic" work of literature. The dialogic work carries on a continual dialogue with other works of literature and other authors. It does not merely answer, correct, silence, or extend a previous work, but informs and is continually informed by the previous work. Dialogic literature is in communication with multiple works. This is not merely a matter of influence, for the dialogue extends in both directions, and the previous work of literature is as altered by the dialogue as the present one is. Though Bakhtin's "dialogic" emanates from his work with colleagues in what we now call the "Bakhtin Circle" in years following 1918, his work was not known to the West or translated into English until the 1970s. For those only recently introduced to Bakhtin's ideas but familiar with T. S. Eliot, his "dialogic" is consonant with Eliot's ideas in "Tradition and the Individual Talent," where Eliot holds that "the past should be altered by the present as much as the present is directed by the past". For Bakhtin, the influence can also occur at the level of the individual word or phrase as much as it does the work and even the oeuvre or collection of works. A German cannot use the word "fatherland" or the phrase "blood and soil" without (possibly unintentionally) also echoing (or, Bakhtin would say "refracting") the meaning that those terms took on under Nazism. Every word has a history of usage to which it responds, and anticipates a future response. The term 'dialogic' does not only apply to literature. For Bakhtin, all language—indeed, all thought—appears as dialogical. This means that everything anybody ever says always exists in response to things that have been said before and in anticipation of things that will be said in response. In other words, we do not speak in a vacuum. All language (and the ideas which language contains and communicates) is dynamic, relational and engaged in a process of endless redescriptions of the world. Bakhtin also emphasized certain uses of language that maximized the dialogic nature of words, and other uses that attempted to limit or restrict their polyvocality. At one extreme is novelistic discourse, particularly that of a Dostoevsky (or Mark Twain) in which various registers and languages are allowed to interact with and respond to each other. At the other extreme would be the military order (or "1984" newspeak) which attempts to minimize all orientations of the work toward the past or the future, and which prompts no response but obedience. 
Distinction between dialogic and dialectic A dialogic process stands in contrast to a dialectic process (proposed by G. W. F. Hegel): In a dialectic process describing the interaction and resolution between multiple paradigms or ideologies, one putative solution establishes primacy over the others. The goal of a dialectic process is to merge point and counterpoint (thesis and antithesis) into a compromise or other state of agreement via conflict and tension (synthesis), a "synthesis that evolves from the opposition between thesis and antithesis". Examples of the dialectic process can be found in Plato's Republic. In a dialogic process, various approaches coexist and are comparatively existential and relativistic in their interaction. Here, each ideology can hold more salience in particular circumstances. Changes can be made within these ideologies if a strategy does not have the desired effect. This distinction is observed in studies of personal identity, national identity, and group identity. Sociologist Richard Sennett has stated that the distinction between dialogic and dialectic is fundamental to understanding human communication. Sennett says that dialectic deals with the explicit meaning of statements and tends to lead to closure and resolution, whereas dialogic processes, especially those involved in regular spoken conversation, involve a type of listening that attends to the implicit intentions behind the speaker's actual words. Unlike a dialectic process, dialogics often do not lead to closure and remain unresolved. Compared to dialectics, a dialogic exchange can be less competitive, and more suitable for facilitating cooperation. See also Allusion Dialogic learning Dialogical analysis Dialogical self Heteroglossia Internal discourse Relational dialectics Notes References Literary concepts Postmodern theory Post-structuralism
0.768888
0.981891
0.754964
Naturalism (theatre)
Naturalism is a movement in European drama and theatre that developed in the late 19th and early 20th centuries. It refers to theatre that attempts to create an illusion of reality through a range of dramatic and theatrical strategies. Interest in naturalism especially flourished with the French playwrights of the time, but the most successful example is Strindberg's play Miss Julie, which was written with the intention to abide by both his own particular version of naturalism, and also the version described by the French novelist and literary theoretician, Emile Zola. Zola's term for naturalism is la nouvelle formule. The three primary principles of naturalism (faire vrai, faire grand and faire simple) are first, that the play should be realistic, and the result of a careful study of human behaviour and psychology. The characters should be flesh and blood; their motivations and actions should be grounded in their heredity and environment. The presentation of a naturalistic play, in terms of the setting and performances, should be realistic and not flamboyant or theatrical. The single setting of Miss Julie, for example, is a kitchen. Second, the conflicts in the play should be issues of meaningful, life-altering significance — not small or petty. And third, the play should be simple — not cluttered with complicated sub-plots or lengthy expositions. Darwinism pervades naturalistic plays, especially in the determining role of the environment on character, and as motivation for behavior. Naturalism emphasizes everyday speech forms; plausibility in the writing (no ghosts, spirits or gods intervening in the human action); a choice of subjects that are contemporary and reasonable (no exotic, otherworldly or fantastic locales, nor historical or mythic time-periods); an extension of the social range of characters portrayed (not only the aristocrats of classical drama but also bourgeois and working-class protagonists) and social conflicts; and a style of acting that attempts to recreate the impression of reality. Naturalism was first advocated explicitly by Émile Zola in his 1880 essay entitled Naturalism on the Stage. Influences Naturalistic writers were influenced by the theory of evolution of Charles Darwin. They believed that one's heredity and social environment determine one's character. Whereas realism seeks only to describe subjects as they really are, naturalism also attempts to determine "scientifically" the underlying forces (i.e. the environment or heredity) influencing the actions of its subjects. Naturalistic works are opposed to romanticism, in which subjects may receive highly symbolic, idealistic, or even supernatural treatment. They often include uncouth or sordid subject matter; for example, Émile Zola's works had a frankness about sexuality along with a pervasive pessimism. Naturalistic works exposed the dark harshness of life, including poverty, racism, sex, prejudice, disease, prostitution, and filth. As a result, Naturalistic writers were frequently criticized for being too blunt. 
Plays of naturalism Woyzeck (1837) by Georg Büchner - considered a forerunner to Naturalism A Bitter Fate (1859) by Aleksey Pisemsky A Doll’s House (1879) by Henrik Ibsen The Power of Darkness (1886) by Leo Tolstoy The Father (1887) by August Strindberg Miss Julie (1888) by August Strindberg Creditors (1889) by August Strindberg The Weavers (1892) by Gerhart Hauptmann Drayman Henschel (1898) by Gerhart Hauptmann Uncle Vanya (1898) by Anton Chekhov The Cherry Orchard (1904) by Anton Chekhov See also Naturalism (art) Naturalism (literature) Philosophical naturalism Sociological naturalism Realism in the arts Realism in theatre Kitchen sink drama Notes Further reading Banham, Martin, ed. 1998. The Cambridge Guide to Theatre. Cambridge: Cambridge University Press. Counsell, Colin. 1996. Signs of Performance: An Introduction to Twentieth-Century Theatre. London and New York: Routledge. Hagen, Uta. 1973. Respect for Acting. New York: Macmillan. Hall, Peter. 2004. Shakespeare's Advice to the Players. London: Oberon. Kolocotroni, Vassiliki, Jane Goldman and Olga Taxidou, eds. 1998. Modernism: An Anthology of Sources and Documents. Edinburgh: Edinburgh University Press. Rodenberg, Patsy. 2002. Speaking Shakespeare. London: Methuen. Stanislavski, Konstantin. 1936. An Actor Prepares. London: Methuen, 1988. Weimann, Robert. 1978. Shakespeare and the Popular Tradition in the Theater: Studies in the Social Dimension of Dramatic Form and Function. Baltimore and London: The Johns Hopkins University Press. Williams, Raymond. 1976. Keywords: A Vocabulary of Culture and Society. London: Fontana, 1988. ---. 1989. The Politics of Modernism: Against the New Conformists. Ed. Tony Pinkney. London and New York: Verso. ---. 1993. Drama from Ibsen to Brecht. London: Hogarth. 19th-century theatre Literary movements Realism (art movement) Theatre
0.762588
0.989946
0.754921
Information and media literacy
Information and media literacy (IML) enables people to show and make informed judgments as users of information and media, as well as to become skillful creators and producers of information and media messages. IML is a combination of information literacy and media literacy. The transformative nature of IML includes creative works and creating new knowledge; to publish and collaborate responsibly requires ethical, cultural and social understanding. The term "media and information literacy" is used by UNESCO to differentiate the combined study from the existing study of information literacy. Renee Hobbs suggests that "few people verify the information they find online―both adults and children tend to uncritically trust information they found from whatever source." People need to gauge the credibility of information and can do so by answering three questions: Who is the author? What is the purpose of this message? How was this message constructed? Prior to the 1990s, the primary focus of information literacy was research skills. Media literacy, a study that emerged around the 1970s, traditionally focuses on the analysis and the delivery of information through various forms of media. These days, the study of information literacy has been extended to include the study of media literacy in many countries like the UK, Australia and New Zealand. It is also referred to as information and communication technologies (ICT) in the United States. Educators such as Gregory Ulmer have also defined the field as electracy. In the digital age The definition of literacy is "the ability to read and write". In practice many more skills are needed to locate, critically assess and make effective use of information. By extension, literacy now also includes the ability to manage and interact with digital information and media, in personal, shared and public domains. Historically, "information literacy" has largely been seen from the relatively top-down, organisational viewpoint of library and information sciences. However the same term is also used to describe a generic "information literacy" skill. The modern digital age has led to the proliferation of information spread across the Internet. Individuals must be able to recognize whether information is true or false and better yet know how to locate, evaluate, use, and communicate information in various formats; this is called information literacy. Towards the end of the 20th century, literacy was redefined to include "new literacies" relating to the new skills needed in everyday experience. "Multiliteracies" recognised the multiplicity of literacies, which were often used in combination. "21st century skills" frameworks link new literacies to wider life skills such as creativity, critical thinking, accountability. What these approaches have in common is a focus on the multiple skills needed by individuals to navigate changing personal, professional and public "information landscapes". As the conventional definition of literacy itself continues to evolve among practitioners, so too has the definition of information literacies. Noteworthy definitions include: CILIP, the Chartered Institute of Library and Information Practitioners, defines information literacy as "the ability to think critically and make balanced judgements about any information we find and use". JISC, the Joint Information Systems Committee, refers to information literacy as one of six "digital capabilities", seen as an interconnected group of elements centered on "ICT literacy". 
Mozilla groups digital and other literacies as "21st century skills", a "broad set of knowledge, skills, habits and traits that are important to succeed in today's world". UNESCO, the United Nations Educational, Scientific and Cultural Organization, asserts information literacy as a "universal human right". 21st-century students The IML learning capacities prepare students to be 21st-century literate. According to Jeff Wilhelm (2000), "technology has everything to do with literacy. And being able to use the latest electronic technologies has everything to do with being literate." He supports his argument with J. David Bolter's statement that "if our students are not reading and composing with various electronic technologies, then they are illiterate. They are not just unprepared for the future; they are illiterate right now, in our current time and context". Wilhelm's statement is supported by the 2005 Young Canadians in a Wired World Phase II (YCWW II) survey conducted by the Media Awareness Network of Canada on 5,000 Grade 4–11 students. The key findings of the survey were: 62% of Grade 4 students prefer the Internet. 38% of Grade 4 students prefer the library. 91% of Grade 11 students prefer the Internet. 9% of Grade 11 students prefer the library. Marc Prensky (2001) uses the term "digital native" to describe people who have been brought up in a digital world. The Internet has been a pervasive element of young people's home lives. 94% of kids reported that they had Internet access at home, and a significant majority (61%) had a high-speed connection. By the time kids reach Grade 11, half of them (51 percent) have their own Internet-connected computer, separate and apart from the family computer. The survey also showed that young Canadians are now among the most wired in the world. Contrary to the earlier stereotype of the isolated and awkward computer nerd, today's wired kid is a social kid. In general, many students are better networked through the use of technology than most teachers and parents, who may not understand the abilities of technology. Students are no longer limited to desktop computers. They may use mobile technologies to graph mathematical problems, research a question for social studies, text message an expert for information, or send homework to a drop box. Students are accessing information by using MSN, personal Web pages, Weblogs and social networking sites. Information Literacy is taught to college students in programs such as Chiropractic to shift such fields more towards Evidence Based Practice. Courses which accomplish this are preferentially titled "Information Lit Lab", as opposed to "Information Literacy Lab" or "Information Literacy Laboratory". Teaching and learning in the 21st century Many teachers continue the teaching traditions of the past 50 years. Traditionally, teachers have been the experts sharing their knowledge with children. Technology, and the learning tools it provides access to, forces us to change to being facilitators of learning. We have to change the stereotype of the teacher as the expert who delivers information, and students as consumers of information, in order to meet the needs of digital students. Teachers not only need to learn to speak digital, but also to embrace the language of digital natives. Language is generally defined as a system used to communicate in which symbols convey information. Digital natives can communicate fluently with digital devices and convey information in a way that was impossible without digital devices. 
People born prior to 1988 are sometimes referred to as "digital immigrants." They experience difficulty programming simple devices like a VCR. Digital immigrants do not start pushing buttons to make things work. Learning a language is best done early in a child's development. For second-language acquisition, Hyltenstam (1992) found that the ages of around 6 and 7 seemed to be a cut-off point for bilinguals to achieve native-like proficiency. After that age, second-language learners could approach native-like proficiency, but their language, while containing very few actual errors, would have enough errors to set them apart from the first-language group. More recent research, however, suggests that this effect extends up to 10 years of age. Kindergarten and grades 1 and 2 are critical to student success as digital natives because not all students have a "digital"-rich childhood. Students learning technological skills before Grade 3 can become equivalently bilingual. "Language-minority students who cannot read and write proficiently in English cannot participate fully in American schools, workplaces, or society. They face limited job opportunities and earning power." Speaking "digital" is as important as being literate in order to participate fully in North American society and opportunities. Students' struggle Many students are considered illiterate in media and information for various reasons. They may not see the value of media and information literacy in the 21st-century classroom. Others are not aware of the emergence of the new form of information. Educators need to introduce IML to these students to help them become media and information literate. Very little change will be made if educators are not supporting information and media literacy in their own classrooms. Performance standards, the foundation to support them, and tools to implement them are readily available. Success will come when full implementation and equitable access are established. Shared vision and goals that focus on strategic actions with measurable results are also necessary. When the staff and community, working together, identify and clarify their values, beliefs, assumptions, and perceptions about what they want children to know and be able to do, an important next step will be to discover which of these values and expectations will be achieved. Using the capacity tools to assess IML will allow students, staff and the community to reflect on how well students are meeting learning needs as related to technology. The IML performance standards allow data collection and analysis to provide evidence that student learning needs are being met. After assessing student IML, three questions can be asked: What does each student need to learn? How does one know whether students have met the capacities? How does one respond when students have difficulty in learning? Teachers can use classroom assessment for learning to identify areas that might need increased focus and support. Students can use classroom assessment to set learning goals for themselves. In the curriculum The integration of technology across the curriculum is a positive shift from computers being viewed as boxes to be learned to computers being used as technical communication tools. In addition, recent learning pedagogy recognizes students as creators of knowledge through technology. 
The International Society for Technology in Education (ISTE) has been developing a standard IML curriculum for the US and other countries by implementing the National Educational Technology Standards. United Kingdom In the UK, IML has been promoted among educators through an information literacy website developed by several organizations that have been involved in the field. United States IML is included in the Partnership for the 21st Century program sponsored by the US Department of Education. Special mandates have been provided to Arizona, Iowa, Kansas, Maine, New Jersey, Massachusetts, North Carolina, South Dakota, West Virginia and Wisconsin. Individual school districts, such as the Clarkstown Central School District, have also developed their own information literacy curricula. ISTE has also produced the National Educational Technology Standards for Students, Teachers and Administrators. Canada In British Columbia, Canada, the Ministry of Education has de-listed the Information Technology K to 7 IRP as a stand-alone course, although all of its prescribed learning outcomes are still expected to be integrated across the curriculum. Unfortunately, there has been no clear direction on how to implement IML. The BC Ministry of Education published the Information and Communications Technology Integration Performance Standards, Grades 5 to 10 (ICTI) in 2005. These provide performance expectations for Grades 5 to 10; however, they do not provide guidance for other grades, and the expectations for a Grade 5 and a Grade 10 student are the same. Arab World In the Arab region, media and information literacy was largely ignored until 2011, when the Media Studies Program at the American University of Beirut, the Open Society Foundations and the Arab-US Association for Communication Educators (AUSACE) launched a regional conference themed "New Directions: Digital and Media Literacy". The conference attracted significant attention from Arab universities and scholars, who discussed obstacles and needs to advance media literacy in the Arab region, including developing curricula in Arabic, training faculty and promoting the field. Following up on that recommendation, the Media Studies Program at AUB and the Open Society Foundations, in collaboration with the Salzburg Academy on Media and Global Change, launched in 2013 the first regional initiative to develop, vitalize, and advance media literacy education in the Arab region. The Media and Digital Literacy Academy of Beirut (MDLAB) offered an annual two-week summer training program in addition to working year-round to develop media literacy curricula and programs. The academy is conducted in Arabic and English and brings pioneering international instructors and professionals to teach advanced digital and media literacy concepts to young Arab academics and graduate students from various fields. 
MDLAB hopes that the participating Arab academics will carry what they learned back to their countries and institutions, and it offers free curricular material in Arabic and English, including media literacy syllabi, lectures, exercises, lesson plans, and multi-media material, to assist and encourage the integration of digital and media literacy into Arab university and school curricula. In recognition of MDLAB's accomplishments in advancing media literacy education in the Arab region, the founder of MDLAB received the 2015 UNESCO-UNAOC International Media and Information Literacy Award. Prior to 2013, only two Arab universities offered media literacy courses: the American University of Beirut (AUB) and the American University of Sharjah (AUS). Three years after the launch of MDLAB, over two dozen Arab universities had incorporated media literacy education into their curricula, both as stand-alone courses and as modules incorporated into existing media courses. Among the universities that offer full-fledged media literacy courses (as of 2015) are Lebanese American University (Lebanon), Birzeit University (Palestine), University of Balamand (Lebanon), Damascus University (Syria), Rafik Hariri University (Lebanon), Notre Dame University (Lebanon), Ahram Canadian University (Egypt), American University of Beirut (Lebanon), American University of Sharjah (UAE), and Al Azm University (Lebanon). The first Arab school to adopt media literacy as part of its strategic plan is the International College (IC) in Lebanon. Efforts to introduce media literacy to the region's other universities and schools continue with the help of other international organizations, such as UNESCO, UNAOC, AREACORE, DAAD, and OSF. Asia In Singapore and Hong Kong, information literacy or information technology has been listed as part of the formal curriculum. Barriers One barrier to learning to read is the lack of books, while a barrier to learning IML is the lack of access to technology. Highlighting the value of IML helps to identify existing barriers within school infrastructure, staff development, and support systems. While there is a continued need to work on the foundations that provide sustainable and equitable access, the biggest obstacle is school climate. Marc Prensky identifies one barrier as teachers viewing digital devices as distractions: "Let's admit the real reason that we ban cell phones is that, given the opportunity to use them, students would vote with their attention, just as adults would 'vote with their feet' by leaving the room when a presentation is not compelling." The mindset of banning new technology, and fearing the bad things that can happen with it, can affect educational decisions. The decision to ban digital devices affects students for the rest of their lives. Any tool that is used poorly or incorrectly can be unsafe. Safety lessons are mandatory in industrial technology and science, yet safety or ethics lessons are not mandatory for using technology. Not all decisions in schools are measured against shared beliefs. One school district in Ontario banned digital devices from its schools, and local schools have been looking at doing the same. These kinds of reactions are often about immediate action rather than about teaching, learning or creating solutions. Many barriers to IML exist. Key information literacies Information literacies are the multiple literacies individuals may need to function effectively in the global information society. The following are key information literacies. 
Critical literacy Critical literacy is the ability to actively analyse texts and media to identify underlying messages, taking into account context, perspective and possible biases. Computer literacy Computer literacy is the ability to use computers and other digital devices efficiently enough to carry out basic or more advanced tasks. Copyright literacy Copyright literacy is the ability to manage creative output and make appropriate use of the work of others, informed by knowledge of copyright, ownership, usage and other rights. Data literacy Data literacy is the ability to gather, interpret and analyse data, and communicate insights and information from this analysis. Increasingly important in everyday life, over 80% of employers cite data literacy as a key skill for employees. Digital literacy Digital literacy is the ability to use technology to manage and interact with digitized information, participate in online practice and originate digital work. Disaster literacy Disaster literacy is an individual's ability to read, understand, and use information to make informed decisions and follow instructions in the context of mitigating, preparing, responding, and recovering from a disaster. Financial literacy Financial literacy is the capacity of an individual to understand available banking products, services, laws and obligations, and make informed decisions on financial assets. Health literacy Health literacy is the ability of individuals to locate, understand, manage, and make appropriate use of information to help promote and maintain good health. Media literacy Media literacy is the ability to locate, critically evaluate, communicate with and make effective use of different types of media. Transliteracy Transliteracy combines capabilities in information literacy, technology, creativity, communication and collaboration, critical thinking, practical skills and craft, to cross cultures, contexts, technologies and media. Visual literacy Visual literacy is the ability to interpret and make meaning from visual information such as static or moving images, graphics, symbols, diagrams, maps. Web literacy Web literacy is the ability to navigate the world wide web, interact effectively and thrive online, while managing online presence, privacy and risk. See also Critical literacy Digital literacy Multiliteracy Numeracy Visual literacy Center for Documentation and Information Notes References August, Diane. (2006). Developing Literacy in Second-Language Learners: Report of the National Literacy Panel on Language-Minority Children and Youth. 1. Retrieved March 24, 2007 from BC Ministry of Education. (2006). Information and Communication Technology Integration. Retrieved December 1, 2006, from B.C. Performance Standards - Province of British Columbia. BC Ministry of Education. (2005). Science K to 7: Integrated Resource Package 2005. 32. Retrieved December 1, 2006, from https://web.archive.org/web/20061007172614/http://www.bced.gov.bc.ca/irp/scik7.pdf BC Ministry of Education. (1996). Information Technology K to 7: Integrated Resource Package. Retrieved December 1, 2006, from DuFour, R., Burnette, B. (2002) Pull out negativity by its roots. [electronic version] Journal of Staff Development. 23 (2), para. 23. Fedorov, A. (2008). On Media Education. Moscow: ICOS UNESCO 'Information for All'. International Society of Technology Educators. (2004). National Education Testing Standards – Students. Retrieved November 15, 2006 Lambert, L. (1998). Building Leadership Capacity. ASCD. 
Alexandria, Virginia 6, 23. Media Awareness Network. (2003). Young Canadians in a wired world; The Students' View. Retrieved on May 11, 2007 from Media Awareness Network. (2005a). Young Canadians in a wired world Phase II: Trends and Recommendations. Valerie Steeves. Retrieved on March 19, 2007 upload/YCWWII_trends_recomm.pdf. Media Awareness Network. (2005b). Young Canadians in a wired world phase ii. ERIN Research Inc. 6. Retrieved on March 19, 2007 Prensky, M. (2001). Digital Natives, Digital Immigrants. [electronic version] On the Horizon. 9 (5), 1. Prensky, M. (2006). Listen to the Natives. [electronic version] Educational Leadership. 63 (4) 8 -13. Surrey School District No. 36 (Surrey). (2005) Vision 2010 Strategic Plan. 1 – 4. Retrieved May 8, 2007 Surrey School District No. 36 (Surrey). (2007) Quick Facts. 1. Retrieved May 10, 2007 Wilhelm, J. (2000). Literacy by Design. Voices from the middle, A publication of the national council of teachers of English. 7 (3). 4 – 14. Retrieved May 11, 2007 Language Literacy Information science
DPSIR
DPSIR (drivers, pressures, state, impact, and response model of intervention) is a causal framework used to describe the interactions between society and the environment. It seeks to analyze and assess environmental problems by bringing together various scientific disciplines, environmental managers, and stakeholders, and solve them by incorporating sustainable development. First, the indicators are categorized into "drivers" which put "pressures" in the "state" of the system, which in turn results in certain "impacts" that will lead to various "responses" to maintain or recover the system under consideration. It is followed by the organization of available data, and suggestion of procedures to collect missing data for future analysis. Since its formulation in the late 1990s, it has been widely adopted by international organizations for ecosystem-based study in various fields like biodiversity, soil erosion, and groundwater depletion and contamination. In recent times, the framework has been used in combination with other analytical methods and models, to compensate for its shortcomings. It is employed to evaluate environmental changes in ecosystems, identify the social and economic pressures on a system, predict potential challenges and improve management practices. The flexibility and general applicability of the framework make it a resilient tool that can be applied in social, economic, and institutional domains as well. History The Driver-Pressure-State-Impact-Response framework was developed by the European Environment Agency (EEA) in 1999. It was built upon several existing environmental reporting frameworks, like the Pressure-State-Response (PSR) framework developed by the Organization for Economic Co-operation and Development (OECD) in 1993, which itself was an extension of Rapport and Friend's Stress-Response (SR) framework (1979). The PSR framework simplified environmental problems and solutions into variables that stress the cause-effect relationship between human activities that exert pressure on the environment, the state of the environment, and society's response to the condition. Since it focused on anthropocentric pressures and responses, it did not effectively factor natural variability into the pressure category. This led to the development of the expanded Driving Force-State-Response (DSR) framework, by the United Nations Commission on Sustainable Development (CSD) in 1997. A primary modification was the expansion of the concept of “pressure” to include social, political, economic, demographic, and natural system pressures. However, by replacing “pressure” with “driving force”, the model failed to account for the underlying reasons for the pressure, much like its antecedent. It also did not address the motivations behind responses to changes in the state of the environment. The refined DPSIR model sought to address these shortcomings of its predecessors by addressing root causes of the human activities that impact the environment, by incorporating natural variability as a pressure on the current state and addressing responses to the impact of changes in state on human well-being. Unlike PSR and DSR, DPSIR is not a model, but a means of classifying and disseminating information related to environmental challenges. Since its conception, it has evolved into modified frameworks like Driver-Pressure-Chemical State-Ecological State-Response (DPCER), Driver-Pressure-State-Welfare-Response (DPSWR), and Driver-Pressure-State-Ecosystem-Response (DPSER). 
The DPSIR Framework Driver (Driving Force) Driver refers to the social, demographic, and economic developments which influence the human activities that have a direct impact on the environment. They can further be subdivided into primary and secondary driving forces. Primary driving forces refer to technological and societal actors that motivate human activities like population growth and distribution of wealth. The developments induced by these drivers give rise to secondary driving forces, which are human activities triggering “pressures” and “impacts”, like land-use changes, urban expansion and industrial developments. Drivers can also be identified as underlying or immediate, physical or socio-economic, and natural or anthropogenic, based on the scope and sector in which they are being used. Pressure Pressure represents the consequence of the driving force, which in turn affects the state of the environment. They are usually depicted as unwanted and negative, based on the concept that any change in the environment caused by human activities is damaging and degrading. Pressures can have effects on the short run (e.g.: deforestation), or the long run (e.g.: climate change), which if known with sufficient certainty, can be expressed as a probability. They can be both human-induced, like emissions, fuel extraction, and solid waste generation, and natural processes, like solar radiation and volcanic eruptions. Pressures can also be sub-categorized as endogenic managed pressures, when they stem from within the system and can be controlled (e.g.: land claim, power generation), and as exogenic unmanaged pressures, when they stem from outside the system and cannot be controlled (e.g.: climate change, geomorphic activities). State State describes the physical, chemical and biological condition of the environment or observable temporal changes in the system. It may refer to natural systems (e.g.: atmospheric CO2 concentrations, temperature), socio-economic systems (e.g.: living conditions of humans, economic situations of an industry), or a combination of both (e.g.: number of tourists, size of current population). It includes a wide range of features, like physico-chemical characteristics of ecosystems, quantity and quality of resources or “carrying capacity”, management of fragile species and ecosystems, living conditions for humans, and exposure or the effects of pressures on humans. It is not intended to just be static, but to reflect current trends as well, like increasing eutrophication and change in biodiversity. Impact Impact refers to how changes in the state of the system affect human well-being. It is often measured in terms of damages to the environment or human health, like migration, poverty, and increased vulnerability to diseases, but can also be identified and quantified without any positive or negative connotation, by simply indicating a change in the environmental parameters. Impact can be ecologic (e.g.: reduction of wetlands, biodiversity loss), socio-economic (e.g.: reduced tourism), or a combination of both. Its definition may vary depending on the discipline and methodology applied. For instance, it refers to the effect on living beings and non-living domains of ecosystems in biosciences (e.g.: modifications in the chemical composition of air or water), whereas it is associated with the effects on human systems related to changes in the environmental functions in socio-economic sciences (e.g.: physical and mental health). 
Response Response refers to actions taken to correct the problems of the previous stages, by adjusting the drivers, reducing the pressure on the system, bringing the system back to its initial state, and mitigating the impacts. It can be associated uniquely with policy action, or to different levels of the society, including groups and/or individuals from the private, government or non-governmental sectors. Responses are mostly designed and/or implemented as political actions of protection, mitigation, conservation, or promotion. A mix of effective top-down political action and bottom-up social awareness can also be developed as responses, such as eco-communities or improved waste recycling rates. Criticisms and Limitations Despite the adaptability of the framework, it has faced several criticisms. One of the main goals of the framework is to provide environmental managers, scientists of various disciplines, and stakeholders with a common forum and language to identify, analyze and assess environmental problems and consequences. However, several notable authors have mentioned that it lacks a well-defined set of categories, which undermines the comparability between studies, even if they are similar. For instance, climate change can be considered as a natural driver, but is primarily caused by greenhouse gases (GSG) produced by human activities, which may be categorized under “pressure”.  A wastewater treatment plant is considered a response while dealing with water pollution, but a pressure when effluent runoff leading to eutrophication is taken into account. This ambivalence of variables associated with the framework has been criticized as a lack of good communication between researchers and between stakeholders and policymakers. Another criticism is the misguiding simplicity of the framework, which ignores the complex synergy between the categories. For instance, an impact can be caused by various different state conditions and responses to other impacts, which is not addressed by DPSIR. Some authors also argue that the framework is flawed as it does not clearly illustrate the cause-effect linkage for environmental problems. The reasons behind these contextual differences seem to be differences in opinions, characteristics of specific case studies, misunderstanding of the concepts and inadequate knowledge of the system under consideration. DPSIR was initially proposed as a conceptual framework rather than a practical guidance, by global organizations. This means that at a local level, analyses using the framework can cause some significant problems. DPSIR does not encourage the examination of locally specific attributes for individual decisions, which when aggregated, could have potentially large impacts on sustainability. For instance, a farmer who chooses a particular way of livelihood may not create any consequential alterations on the system, but the aggregation of farmers making similar choices will have a measurable and tangible effect. Any efforts to evaluate sustainability without considering local knowledge could lead to misrepresentations of local situations, misunderstandings of what works in particular areas and even project failure. While there is no explicit hierarchy of authority in the DPSIR framework, the power difference between “developers” and the “developing” could be perceived as the contributor to the lack of focus on local, informal responses at the scale of drivers and pressures, thus compromising the validity of any analysis conducted using it. 
The “developers” refer to the Non-Governmental Organizations (NGOs), State mechanisms and other international organizations with the privilege to access various resources and power to use knowledge to change the world, and the “developing” refers to local communities. According to this criticism, the latter is less capable of responding to environmental problems than the former. This undermines valuable indigenous knowledge about various components of the framework in a particular region, since the inclusion of the knowledge is almost exclusively left at the discretion of the “developers”. Another limitation of the framework is the exclusion of social and economic developments on the environment, particularly for future scenarios. Furthermore, DPSIR does not explicitly prioritize responses and fails to determine the effectiveness of each response individually, when working with complex systems. This has been one of the most criticized drawbacks of the framework, since it fails to capture the dynamic nature of real-world problems, which cannot be expressed by simple causal relations. Applications Despite its criticisms, DPSIR continues to be widely used to frame and assess environmental problems to identify appropriate responses. Its main objective is to support sustainable management of natural resources. DPSIR structures indicators related to the environmental problem addressed with reference to the political objectives and focuses on supposed causal relationships effectively, such that it appeals to policy actors. Some examples include the assessment of the pressure of alien species, evaluation of impacts of developmental activities on the coastal environment and society, identification of economic elements affecting global wildfire activities, and cost-benefit analysis (CBA) and gross domestic product (GDP) correction.   To compensate for its shortcomings, DPSIR is also used in conjunction with several analytical methods and models. It has been used in conjunction with Multiple-Criteria Decision Making (MCDM) for desertification risk management, with Analytic Hierarchy Process (AHP) to study urban green electricity power, and with Tobit model to assess freshwater ecosystems. The framework itself has also been modified to assess specific systems, like DPSWR, which focuses on the impacts on human welfare alone, by shifting ecological impact to the state category. Another approach is a differential DPSIR (ΔDPSIR), which evaluates the changes in drivers, pressures and state after implementing a management response, making it valuable both as a scientific output and a system management tool. The flexibility offered by the framework makes it an effective tool with numerous applications, provided the system is properly studied and understood by the stakeholders. References External links DPSIR-Model of the European Environment Agency (EEA) Environmental terminology Industrial ecology
Dalton Plan
The Dalton Plan is an educational concept created by Helen Parkhurst. It is inspired by the intellectual ferment at the turn of the 20th century. Educational thinkers such as Maria Montessori and John Dewey influenced Parkhurst while she created the Dalton Plan. Their aim was to achieve a balance between a child's talent and the needs of the community. Characteristics Parkhurst's specific objectives were as follows: To tailor each student's program to his or her needs, interests and abilities. To promote each student's independence and dependability. To enhance the student's social skills. To increase their sense of responsibility toward others. Influenced at least in part by the teachings of Judo after conversations with the founder of Kodokan Judo, Dr Jigoro Kano. Ref page 72 and 86 ISBN 978-1-56836-479-1 She developed a three-part plan that continues to be the structural foundation of a Dalton education: The House, a social community of students. The Assignment, a monthly goal which students contract to complete. The Laboratory, a subject-based classroom intended to be the center of the educational experience. The laboratory involves students from fourth grade through the end of secondary education. Students move between subject "laboratories" (classrooms) and explore themes at their own pace. Introduction in UK In 1920, an article describing the working of the Dalton Plan in detail was published in the Times' Educational Supplement. Parkhurst "has given to the secondary school the leisure and culture of the University student; she has uncongested the curriculum; she has abolished the teacher's nightly preparation of classes and the child's nightmare of homework. At the same time the children under her regime cover automatically all the ground prescribed for examinations 'of matriculation standard,' and examination failures among them are nil." The Dalton Plan is a method of education by which pupils work at their own pace, and receive individual help from the teacher when necessary. There is no formal class instruction. Students draw up time-tables and are responsible for finishing the work on their syllabuses or assignments. Students are also encouraged to help each other with their work. The underlying aim of the Dalton Plan is to achieve the highest mental, moral, physical and spiritual development of the pupil. In the spring of 1921, English headmistress Rosa Bassett went to the Children's University School and stayed with Parkhurst. They spent hours talking about education. Parkhurst found Bassett in complete agreement with her ideas: "She was Dalton," Parkhurst wrote 50 years later. She described Bassett and Belle Rennie as the two people in England who were most enthusiastic and most helpful about the introduction of the Dalton Plan. Rosa Bassett was instrumental in the first application of the Dalton Plan of teaching within an English secondary school. 
She contributed a chapter to Parkhurst's book on the Plan, Schools List of schools Australia Ascham School, Sydney, 1922 Methodist Ladies' College, Perth, 1921 Austria Europaschule, Wien HTBL Lastenstraße, Klagenfurt Internationale Daltonschule mit IT-Schwerpunkt Wels de La Tour Schule Deutschlandsberg Belgium Basisschool De Kleine Icarus, Gent Basisschool De Lotus, Gent Basisschool Dalton 1 Hasselt Basisschool Dalton 2 Hasselt Middelbare Dalton school VanVeldeke Hasselt Het Leerlabo, kleuter-, lager en secundair daltononderwijs, Westerlo Dalton Middenschool Lyceum, Gent China Shanghai East Century School, Shanghai Little Dalton Kindergarten, Hong Kong Dalton School Hong Kong, Hong Kong Wenzhou Dalton Elementary School, Wenzhou Czech Republic ZŠ a MŠ Chalabalova, Brno ZŠ a MŠ Husova, Brno ZŠ a MŠ Křídlovick, Brno ZŠ a MŠ Mutĕnická, Brno ZŠ Rájec-Jestřebí Gymnázium Slovanské námĕstí, Brno ZŠ Benešova Třebíč Základní škola, Brno Základní škola Brno, Brno Germany Angell Akademie, Freiburg Gymnasium Alsdorf, Alsdorf Grundschule Unstruttal, Ammern, near Mühlhausen Marie-Kahle-Gesamtschule Bonn, Bonn Albrecht-Dürer-Gymnasium Berlin, Berlin Theodor-Heuss-Gymnasium, Dinslaken Schillerschule, Erfurt Gymnasium Essen-Überruhr, Essen Internationale Gesamtschule Heidelberg, Heidelberg Gymnasium Lage, Lage Gymnasium Vegesack, Bremen India Global School, Rahuri. MH Japan In Japan, Admiral Osami Nagano introduced a progressive educational method such as the Dalton plan to the Japanese Naval Academy School and influenced it. Dalton Tokyo, Tokyo Dalton Nagoya, Nagoya Korea Cheongna Dalton School, Cheongna Netherlands Basisschool de Bakelgeert, Boxmeer Brederode Daltonschool, Santpoort Zuid Casimirschool, Gouda Dalton basisschool de Twijn, Utrecht Dalton basisschool Rijnsweerd, Utrecht Dalton Den Haag, The Hague (Den Haag) Dalton mavo, Naaldwijk Het Tangram, Rotterdam Daltonexpertisecentrum, Instituut Theo Thijssen, Hogeschool, Utrecht Daltonschool De Klipper, Berkel en Rodenrijs Daltonschool Hengelo Zuid, Hengelo Dalton Lyceum Barendrecht, Barendrecht De Achtbaan, Amersfoort De Klinker, Schiedam De Poolster, Amsterdam De Zevenster, Enschede 2de Daltonschool, Amsterdam 3de Daltonschool, Amsterdam Erasmus College, Zoetermeer Het Cheider, Amsterdam Helen Parkhurst College, Almere Hogeland College, Dalton vmbo, Warffum Kardinaal Alfrinkschool (voor Daltononderwijs), Wageningen Katholieke Daltonschool De Leeuwerik, Leiderdorp Koningin Wilhelmina School Overveen Markenhage, Breda Maurick College. Vught Saxion Hogeschool, Deventer Schooladviescentrum, Utrecht Stedelijk Daltoncollege, Zutphen Stedelijk Dalton College, Alkmaar Stedelijk Dalton Lyceum, Dordrecht Spinoza Lyceum, Amsterdam Spinoza 20first, Amsterdam obs Theo Thijssen, Assen obs Kloosterveen, Assen Tweemaster-Kameleon, Oost-Souburg De Vijfster, Capelle aan den IJssel Wenke Dalton Consultancy, Meppel Dalton College, Voorburg De Waterval, Ermelo Jeanne d'Arc, 't Harde De Juliana Daltonschool, Bussum Wolfert Dalton, Rotterdam Daltonschool De Margriet, Rotterdam Wolfert Lyceum, Bergschenhoek Daltonschool Klaverweide, Noordwijk Daltonschool Maarssen, Maarssen obs Het Klokhuis, Duiven Ronerborg, Roden KBS Eloy, Ugchelen Chr. 
Daltonschool Koningin Emma, Zwolle Haarlemmermeer lyceum Zuidrand, Hoofddorp De Tweemaster, Hoorn Basisschool De Ley, Leiden Poland Academy International, Warsaw Russia Dalton School 1080, Moscow United Kingdom Bedales School, Hampshire Bryanston School, Blandford, Dorset Millington Primary School, Portadown St Trinnean's School, Edinburgh (which inspired the fictional St Trinian's) York Way Girls' School in King's Cross United States Dalton School, New York City See also Dalton International J. G. Jeffreys, who introduced the Plan at Bryanston School, in England. References External links Dalton School homepage Open Source adoption Alternative education Education in the United States Education in the Netherlands Education in the Czech Republic School types
Summative assessment
Summative assessment, summative evaluation, or assessment of learning is the assessment of participants in an educational program. Summative assessments are designed both to assess the effectiveness of the program and the learning of the participants. This contrasts with formative assessment, which summarizes the participants' development at a particular time to inform instructors of student learning progress. The goal of summative assessment is to evaluate student learning at the end of an instructional unit by comparing it against a standard or benchmark. Summative assessments may be distributed throughout a course, but are often given after a particular unit (or collection of topics). Summative assessment usually involves students receiving a grade that indicates their level of performance. Grading systems can include a percentage, pass/fail, or some other form of scale grade. Summative assessments are weighted more heavily than formative assessments. They are often high stakes, which means that they have a high point value. Examples of summative assessments include a midterm exam, a final project, a paper, a senior recital, or another format. Instructional design Summative assessment is used as an evaluation technique in instructional design. It can provide information on the efficacy of an educational unit of study. Summative evaluation judges the worth or value of an educational unit of study at its conclusion. Summative assessments also serve the purpose of evaluating student learning. In schools, these assessments vary: they may take the form of traditional written tests, essays, presentations, discussions, or reports in other formats. There are several factors which designers of summative assessments must take into consideration. Firstly, a summative assessment must have validity; that is, it must evaluate the standards or learning objectives that were taught over the course of the unit. Secondly, a summative assessment must be reliable: the results of the assessment should be consistent. In other words, the assessment should be designed to be as objective as possible, though this can be challenging in certain disciplines. Summative assessments are usually given at the end of a unit, and they are usually high stakes, with the grade being weighted more heavily than formative assessments taken during the unit. Many educators and school administrators use data from summative assessments to help identify learning gaps. This information can come from summative assessments taken in the classroom or from district-wide, school-wide or statewide standardized tests. Once educators and administrators have student summative assessment data, many districts place students into educational interventions or enrichment programs. Intervention programs are designed to teach students skills in which they are not yet proficient, in order to help them make progress and lessen learning gaps, while enrichment programs are designed to challenge students who have mastered many skills and have high summative assessment scores. Educator performance Summative assessment can also refer to the assessment of educational faculty by their supervisor, with the aim of measuring all teachers on the same criteria to determine their level of performance. In this context, summative assessment is meant to meet the school or district's needs for teacher accountability. The evaluation usually takes the shape of a form and consists of checklists and occasionally narratives. 
Areas evaluated include classroom climate, instruction, professionalism, planning and preparation. Methods Methods of summative assessment aim to summarize overall learning at the completion of the course or unit. Questionnaires Surveys Interviews Observations Testing (specific test created by the teacher or establishment made to include all points of a unit or specific information taught in a given time frame) Projects (a culminating project that synthesizes knowledge) See also Examination Educational assessment Formative assessment Computer-aided assessment Types of assessment References Educational evaluation methods Educational psychology School terminology
Phenomenon-based learning
Phenomenon-based learning is a constructivist form of learning or pedagogy, where students study a topic or concept in a holistic approach instead of in a subject-based approach. Phenomenon-based learning includes both topical learning (also known as topic-based learning or instruction), where the phenomenon studied is a specific topic, event, or fact, and thematic learning (also known as theme-based learning or instruction), where the phenomenon studied is a concept or idea. Phenomenon-based learning emerged as a response to the idea that traditional, subject-based learning is outdated, removed from the real world, and does not offer the optimum approach to developing 21st century skills. It has been used in a wide variety of higher educational institutions and more recently in grade schools. Features PhBL forges connections across content and subject areas within the limits of the particular focus. It can be used as part of teacher-centered passive learning, although in practice it is used more in student-centered active learning environments, including inquiry-based learning, problem-based learning, or project-based learning. An example of topical learning might be studying a phenomenon or topic (such as a geographical feature, historical event, or notable person) instead of isolated subjects (such as geography, history, or literature). In the traditional subject-based approach of most Western learning environments, the learner would spend a set amount of time studying each subject; with topical learning, the trend is to spend a greater amount of time focused on the broader topic. During this topical study, specific knowledge or information from the individual subjects would normally be introduced in a relevant context instead of in isolation or the abstract. Topical learning is most frequently applied as a learner-centered approach, where the student, not the teacher, selects the topic or phenomenon to be studied. This is thought to be more successful at engaging students and providing deeper learning, as it is more likely to align with their own interests and goals. This aspect has also been recognized as facilitating the integration of education and as a method of enabling students to obtain core knowledge and skills across a range of subjects; it has been considered effective in promoting enthusiasm and greater organization, communication, and evaluation. Similar to project-based learning, it also provides opportunities to explore a topic or concept in detail. With deeper knowledge students develop their own ideas, awareness, and emotions about the topic. While not absolute, PhBL has several main features: Inquiry-based The PhBL approach supports learning in accordance with inquiry learning, problem-based learning, and project and portfolio learning in formal education as well as in the workplace. It begins with studying and developing an understanding of the phenomenon through inquiry. A problem-based learning approach can then be used to discover answers and develop conclusions about the topic. Anchored in the real world The phenomenon-based approach is a form of anchored learning, although it is not necessarily linked to technology. The questions asked and items studied are anchored in real-world phenomena, and the skills that are developed and information learned can be applied across disciplines and beyond the learning environments in real-world situations. 
Real-world phenomena can also be based on fictional narratives, for example a story, book or fictional character, but these are elements drawn from the real world. Contextual PhBL provides a process where new information is applied to the phenomenon or problem. This context demonstrates to the learner immediate utility value of the concepts and information being studied. Application and use of this information during the learning situation is very important for retention. Information that is absorbed only through listening or reading, or in the abstract (such as formulas and theories) without clear and obvious application to the learning at hand, or to real-world application, often remain in short-term memory and are not internalized. Authenticity PhBL can demonstrate the authenticity of learning, a key requirement for deeper learning. In a PhBL environment, cognitive processes correspond to those in the actual/real-world situations where the learned subject matter or skills are used. Manowaluilou et al. (2022) found that Project-Based Learning (PBL) can improve children's learning outcomes when authentic, real-world case-based phenomena are employed. This method promotes greater engagement and a deeper understanding of concepts among students. The intent is to bring genuine practices and processes into learning situations to allow participation in the "expert culture" of the area and practices being studied. Constructivism PhBL is a constructivist form of learning, in which learners are seen as active knowledge builders and information is seen as being constructed as a result of problem-solving. Information and skills are constructed out of ‘little pieces’ into a whole relevant to the situation at the time. When phenomenon based learning occurs in a collaborative setting (the learners work in teams, for example), it supports the socio-constructivist and sociocultural learning theories, in which information is not seen only as an internal element of an individual; instead, information is seen as being formed in a social context. Central issues in the sociocultural learning theories include cultural artifacts (e.g. systems of symbols such as language, mathematical calculation rules and different kinds of thinking tools) – not every learner needs to reinvent the wheel, they can use the information and tools transmitted by cultures. Topical learning Topical learning (TL) has been used for decades to study a specific topic such as a geographical feature, historical event, legal case, medical condition, or notable person, each of which may cover more than one academic subject such as geography, history, law, or medicine. TL forges connections across content areas within the limits of the particular topic. As a cross-disciplinary application, it has been used as a means of assisting foreign language learners to use the topic as a means to learn the foreign language. There are several benefits of topic-based learning. When students focus on learning a topic, the specific subject, such as a foreign language, becomes an important tool or medium to understand the topic, thus providing a meaningful way for learners to use and learn the subject (or language). Thematic learning Thematic learning is used to study a macro theme, such as a broad concept or large and integrated system (political system, ecosystem, growth, etc.). In the United States, it is used to study concepts identified in the Core Curriculum Content Standards. 
As with topical learning, it forges connections across content areas within the limits of the particular topic. Proponents state that by studying the broad concepts that connect what would otherwise be isolated subject areas, learners can develop skills and insights for future learning and the workplace. Finland Commencing in the 2016–2017 academic year, Finland will begin implementing educational reform that will mandate that topical learning (phenomenon-based learning) be introduced alongside traditional subject-based instruction. As part of a new National Curriculum Framework, it will apply to all basic schools for students aged 7–16 years old. Finnish schools have used PhBL for several decades, but it was not previously mandatory. It is anticipated that educators around the world will be studying this development as Finland's educational system is considered to be a model of success by many. This shift coincides with other changes that are encouraging development of 21st century skills such as collaboration, communication, creativity, and critical thinking. References Further reading External links Phenomenal Education How is Finland building schools of the future?, Enterprise Innovation Next Generation Science Standards – Using Phenomena in NGSS-Designed Lessons and Units FAO – Agroecology Knowledge Hub Pedagogy Learning methods Learning programs Learning programs in Europe
Thomas Gordon (psychologist)
Thomas Gordon (March 11, 1918 – August 26, 2002) was an American clinical psychologist and colleague of Carl Rogers. He is widely recognized as a pioneer in teaching communication skills and conflict resolution methods to parents, teachers, leaders, women, youth and salespeople. The model he developed came to be known as the Gordon Model or the Gordon Method, a complete and integrated system for building and maintaining effective relationships. Work Gordon strongly believed that the use of coercive power damages relationships. As an alternative, he taught people skills for communicating and resolving conflicts that they can use to establish or improve good relationships at home, school and at work. These skills include active listening, I-messages and No-Lose Conflict Resolution. He first applied some of these methods in the 1950s as a consultant to business organizations. Then in 1962, he introduced Parent Effectiveness Training (P.E.T.), a course recognized as the first skill-based training program for parents. He taught the first class to a group of 14 parents in a Pasadena, California cafeteria. He then began training instructors throughout the U.S. to teach it in their communities. Over the next several years, the course spread to all 50 states. In 1970, Gordon wrote the Parent Effectiveness Training (P.E.T.) book to extend the reach of this new parenting philosophy. To date, the P.E.T. book (revised in 2000) has been published in 33 languages and sold over five million copies. Over a million people have participated in the course in 45 countries around the world. As P.E.T. became known in the educational world, many schools wanted their teachers to learn the same skills so he developed the Teacher Effectiveness Training (T.E.T.) course and in 1974, wrote the Teacher Effectiveness Training (T.E.T.) book (with Noel Burch). The T.E.T. course has been offered around the world as a model that eliminates authoritarian teaching and punitive discipline in the classroom. Although it was a new idea in the 1950s, Gordon's Leader Effectiveness Training (L.E.T.) program became more popular in the 1970s with the increasing acceptance of participative management in the U.S. This course has been taught in hundreds of companies, including many of the Fortune 500. He is recognized as a pioneer in developing a model of democratic and collaborative leadership and identifying the effective communication skills it requires. Both the American Psychological Foundation and the California Psychological Association presented him with lifetime achievement awards. Gordon Training International in Solana Beach, California, the company he founded in 1974, continues his work. His widow, Linda Adams, is the current president and CEO of Gordon Training International. Gordon Method The method emphasizes effective communication and conflict resolution using the win-win strategy. Other skills from his program are active listening and the use of I-messages. Besides P.E.T., Dr. Gordon and his organization has introduced Gordon workshops for leaders (L.E.T.), adults (Be Your Best), youth (Y.E.T and Resolving Conflicts at School), teachers (T.E.T.), salespeople (Synergistic Selling). Select bibliography Children Don't Misbehave by Thomas Gordon, Ph.D. The Power of the Language of Acceptance by Thomas Gordon, Ph.D. How Children Really React to Control by Thomas Gordon, Ph.D. References About Thomas Gordon Origins of the Gordon Model Gordon, Thomas. (2000). 1st rev. pbk. ed edition Three Rivers Press. 
External links Gordon Training International 1918 births 2002 deaths 20th-century American psychologists University of Chicago faculty American humanists American family and parenting writers People from Solana Beach, California
Economic model
An economic model is a theoretical construct representing economic processes by a set of variables and a set of logical and/or quantitative relationships between them. The economic model is a simplified, often mathematical, framework designed to illustrate complex processes. Frequently, economic models posit structural parameters. A model may have various exogenous variables, and those variables may change to create various responses by economic variables. Methodological uses of models include investigation, theorizing, and fitting theories to the world. Overview In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study. Simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful. Selection is important because the nature of an economic model will often determine what facts will be looked at and how they will be compiled. For example, inflation is a general economic concept, but to measure inflation requires a model of behavior, so that an economist can differentiate between changes in relative prices and changes in price that are to be attributed to inflation. In addition to their professional academic interest, uses of models include: Forecasting economic activity in a way in which conclusions are logically related to assumptions; Proposing economic policy to modify future economic activity; Presenting reasoned arguments to politically justify economic policy at the national level, to explain and influence company strategy at the level of the firm, or to provide intelligent advice for household economic decisions at the level of households. Planning and allocation, in the case of centrally planned economies, and on a smaller scale in logistics and management of businesses. In finance, predictive models have been used since the 1980s for trading (investment and speculation). For example, emerging market bonds were often traded based on economic models predicting the growth of the developing nation issuing them. Since the 1990s many long-term risk management models have incorporated economic relationships between simulated variables in an attempt to detect high-exposure future scenarios (often through a Monte Carlo method). A model establishes an argumentative framework for applying logic and mathematics that can be independently discussed and tested and that can be applied in various instances. Policies and arguments that rely on economic models have a clear basis for soundness, namely the validity of the supporting model. Economic models in current use do not pretend to be theories of everything economic; any such pretensions would immediately be thwarted by computational infeasibility and the incompleteness or lack of theories for various types of economic behavior. Therefore, conclusions drawn from models will be approximate representations of economic facts. 
However, properly constructed models can remove extraneous information and isolate useful approximations of key relationships. In this way more can be understood about the relationships in question than by trying to understand the entire economic process. The details of model construction vary with the type of model and its application, but a generic process can be identified. Generally, any modelling process has two steps: generating a model, then checking the model for accuracy (sometimes called diagnostics). The diagnostic step is important because a model is only useful to the extent that it accurately mirrors the relationships that it purports to describe. Creating and diagnosing a model is frequently an iterative process in which the model is modified (and hopefully improved) with each iteration of diagnosis and respecification. Once a satisfactory model is found, it should be double checked by applying it to a different data set. Types of models According to whether all the model variables are deterministic, economic models can be classified as stochastic or non-stochastic models; according to whether all the variables are quantitative, economic models are classified as discrete or continuous choice models; according to the model's intended purpose/function, it can be classified as quantitative or qualitative; according to the model's ambit, it can be classified as a general equilibrium model, a partial equilibrium model, or even a non-equilibrium model; according to the economic agent's characteristics, models can be classified as rational agent models, representative agent models, etc. Stochastic models are formulated using stochastic processes. They model economically observable values over time. Most of econometrics is based on statistics to formulate and test hypotheses about these processes or estimate parameters for them. A widely used class of simple econometric models, popularized by Tinbergen and later Wold, are autoregressive models, in which the stochastic process satisfies some relation between current and past values. Examples of these are autoregressive moving average models and related ones such as autoregressive conditional heteroskedasticity (ARCH) and GARCH models for the modelling of heteroskedasticity. 
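To make the idea of a relation between current and past values concrete, the following is a minimal sketch (in Python, not drawn from the article) of simulating an AR(1) process, the simplest autoregressive model. The parameter values are purely illustrative assumptions, not estimates from real data.

```python
# A minimal sketch of the simplest autoregressive model, AR(1):
#     y_t = c + phi * y_{t-1} + e_t,   e_t ~ N(0, sigma^2).
# Parameter values below are illustrative only.
import random

def simulate_ar1(c=0.5, phi=0.9, sigma=1.0, n=200, y0=0.0, seed=42):
    """Simulate n observations of an AR(1) process and return the path."""
    random.seed(seed)
    y = [y0]
    for _ in range(n - 1):
        shock = random.gauss(0.0, sigma)       # e_t, the random disturbance
        y.append(c + phi * y[-1] + shock)      # y_t = c + phi * y_{t-1} + e_t
    return y

series = simulate_ar1()
# For |phi| < 1 the process is stationary with long-run mean c / (1 - phi) = 5.
print(sum(series[50:]) / len(series[50:]))     # sample mean after a "burn-in"
```

Econometric practice runs in the opposite direction: the coefficients c and phi are estimated from observed data rather than fixed in advance, and hypotheses about them are then tested statistically.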
Non-stochastic models may be purely qualitative (for example, relating to social choice theory) or quantitative (involving rationalization of financial variables, for example with hyperbolic coordinates, and/or specific forms of functional relationships between variables). In some cases the economic predictions of a model merely assert the direction of movement of economic variables, and so the functional relationships are used only in a qualitative sense: for example, if the price of an item increases, then the demand for that item will decrease. For such models, economists often use two-dimensional graphs instead of functions. Qualitative models – although almost all economic models involve some form of mathematical or quantitative analysis, qualitative models are occasionally used. One example is qualitative scenario planning in which possible future events are played out. Another example is non-numerical decision tree analysis. Qualitative models often suffer from lack of precision. At a more practical level, quantitative modelling is applied to many areas of economics and several methodologies have evolved more or less independently of each other. As a result, no overall model taxonomy is naturally available. We can nonetheless provide a few examples that illustrate some particularly relevant points of model construction. An accounting model is one based on the premise that for every credit there is a debit. More symbolically, an accounting model expresses some principle of conservation in the form: algebraic sum of inflows = sinks − sources. This principle is certainly true for money and it is the basis for national income accounting. Accounting models are true by convention; that is, any experimental failure to confirm them would be attributed to fraud, arithmetic error or an extraneous injection (or destruction) of cash, which we would interpret as showing the experiment was conducted improperly. Optimality and constrained optimization models – Other examples of quantitative models are based on principles such as profit or utility maximization. An example of such a model is given by the comparative statics of taxation on the profit-maximizing firm. The profit of a firm is given by π(x) = x p(x) − C(x) − t x, where p(x) is the price that a product commands in the market if it is supplied at the rate x, x p(x) is the revenue obtained from selling the product, C(x) is the cost of bringing the product to market at the rate x, and t is the tax that the firm must pay per unit of the product sold. The profit maximization assumption states that a firm will produce at the output rate x if that rate maximizes the firm's profit. Using differential calculus we can obtain conditions on x under which this holds. The first order maximization condition for x is ∂π/∂x = ∂/∂x [x p(x) − C(x)] − t = 0. Regarding x as an implicitly defined function of t by this equation (see the implicit function theorem), one concludes that the derivative of x with respect to t has the same sign as ∂²/∂x² [x p(x) − C(x)], which is negative if the second order conditions for a local maximum are satisfied. Thus the profit maximization model predicts something about the effect of taxation on output, namely that output decreases with increased taxation. If the predictions of the model fail, we conclude that the profit maximization hypothesis was false; this should lead to alternate theories of the firm, for example based on bounded rationality. Borrowing a notion apparently first used in economics by Paul Samuelson, this model of taxation and the predicted dependency of output on the tax rate illustrates an operationally meaningful theorem; that is, one requiring some economically meaningful assumption that is falsifiable under certain conditions. 
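The comparative-statics argument above can be checked symbolically. The following is a minimal sketch (not part of the original article) that works the taxation example with SymPy under the assumed functional forms p(x) = a − b·x and C(x) = c·x², which are illustrative choices rather than anything specified by the model itself.

```python
# A minimal sketch of the taxation comparative-statics example using SymPy.
# The forms p(x) = a - b*x and C(x) = c*x**2 are illustrative assumptions.
import sympy as sp

x, t, a, b, c = sp.symbols('x t a b c', positive=True)

p = a - b * x                 # inverse demand: price as a function of output rate
C = c * x**2                  # cost of bringing x units to market
profit = x * p - C - t * x    # pi(x) = x*p(x) - C(x) - t*x

# First-order condition d(pi)/dx = 0 defines the profit-maximizing output x*(t).
foc = sp.Eq(sp.diff(profit, x), 0)
x_star = sp.solve(foc, x)[0]          # (a - t) / (2*(b + c))

# Second-order condition: d2(pi)/dx2 < 0 confirms this is a maximum.
soc = sp.diff(profit, x, 2)           # -2*(b + c), negative for b, c > 0

print(x_star)                         # optimal output as a function of the tax
print(sp.diff(x_star, t))             # -1/(2*b + 2*c)  < 0
print(sp.simplify(soc))               # -2*b - 2*c      < 0
```

Under these assumed forms the derivative of the optimal output with respect to the tax is negative, matching the general prediction that output decreases with increased taxation.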
For instance, one ingredient of the Keynesian model is a functional relationship between consumption and national income: C = C(Y). This relationship plays an important role in Keynesian analysis.

Problems with economic models

Most economic models rest on a number of assumptions that are not entirely realistic. For example, agents are often assumed to have perfect information, and markets are often assumed to clear without friction. Or, the model may omit issues that are important to the question being considered, such as externalities. Any analysis of the results of an economic model must therefore consider the extent to which these results may be compromised by inaccuracies in these assumptions, and a large literature has grown up discussing problems with economic models, or at least asserting that their results are unreliable.

History

One of the major problems addressed by economic models has been understanding economic growth. An early attempt to provide a technique to approach this came from the French physiocratic school in the eighteenth century. Among these economists, François Quesnay was known particularly for his development and use of tables he called Tableaux économiques. These tables have in fact been interpreted in more modern terminology as a Leontief model; see the Phillips reference below.

All through the 18th century (that is, well before the founding of modern political economy, conventionally marked by Adam Smith's 1776 Wealth of Nations), simple probabilistic models were used to understand the economics of insurance. This was a natural extrapolation of the theory of gambling, and played an important role both in the development of probability theory itself and in the development of actuarial science. Many of the giants of 18th century mathematics contributed to this field. Around 1730, De Moivre addressed some of these problems in the 3rd edition of The Doctrine of Chances. Even earlier (1709), Nicolas Bernoulli studied problems related to savings and interest in the Ars Conjectandi. In 1730, Daniel Bernoulli studied "moral probability" in his book Mensura Sortis, where he introduced what would today be called "logarithmic utility of money" and applied it to gambling and insurance problems, including a solution of the paradoxical Saint Petersburg problem. All of these developments were summarized by Laplace in his Analytical Theory of Probabilities (1812). Thus, by the time David Ricardo came along, he had a well-established mathematical basis to draw from.

Tests of macroeconomic predictions

In the late 1980s, the Brookings Institution compared 12 leading macroeconomic models available at the time. They compared the models' predictions for how the economy would respond to specific economic shocks (allowing the models to control for all the variability in the real world; this was a test of model vs. model, not a test against the actual outcome). Although the models simplified the world and started from a stable, known set of common parameters, the various models gave significantly different answers. For instance, in calculating the impact of a monetary loosening on output, some models estimated a 3% change in GDP after one year, one gave almost no change, and the rest were spread in between. Partly as a result of such experiments, modern central bankers no longer have as much confidence that it is possible to 'fine-tune' the economy as they had in the 1960s and early 1970s.
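The consumption relationship C = C(Y) mentioned at the start of this passage can be made concrete with a minimal numerical sketch. It assumes a linear consumption function; the coefficients below are illustrative and do not describe any actual economy.

```python
# Minimal sketch of the Keynesian consumption relationship C = C(Y),
# assuming a linear form C(Y) = c0 + c1*Y. All numbers are illustrative.
def consumption(y, c0=50.0, c1=0.8):
    """Aggregate consumption as a function of national income Y."""
    return c0 + c1 * y

def equilibrium_income(investment, c0=50.0, c1=0.8):
    """Solve Y = C(Y) + I for the simple Keynesian cross: Y = (c0 + I) / (1 - c1)."""
    return (c0 + investment) / (1.0 - c1)

base = equilibrium_income(investment=100.0)
print(base, consumption(base) + 100.0)     # at equilibrium, Y = C(Y) + I -> 750.0, 750.0

bumped = equilibrium_income(investment=110.0)
print((bumped - base) / 10.0)              # the multiplier, 1/(1 - c1) = 5.0
```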
Modern policy makers tend to use a less activist approach, explicitly because they lack confidence that their models will actually predict where the economy is going, or the effect of any shock upon it. The new, more humble approach sees danger in dramatic policy changes based on model predictions, because of several practical and theoretical limitations in current macroeconomic models; in addition to the theoretical pitfalls listed above, some problems specific to aggregate modelling are:

Limitations in model construction caused by difficulties in understanding the underlying mechanisms of the real economy. (Hence the profusion of separate models.)
The law of unintended consequences, on elements of the real economy not yet included in the model.
The time lag in both receiving data and the reaction of economic variables to policy makers' attempts to 'steer' them (mostly through monetary policy) in the direction that central bankers want them to move. Milton Friedman has vigorously argued that these lags are so long and unpredictably variable that effective management of the macroeconomy is impossible.
The difficulty in correctly specifying all of the parameters (through econometric measurements) even if the structural model and data were perfect.
The fact that all the model's relationships and coefficients are stochastic, so that the error term becomes very large quickly, and the available snapshot of the input parameters is already out of date.

Modern economic models incorporate the reaction of the public and markets to policy makers' actions (through game theory); this feedback was added following the rational expectations revolution and Robert Lucas, Jr.'s Lucas critique of non-microfounded models. If the response to the decision maker's actions (and their credibility) must be included in the model, then it becomes much harder to influence some of the variables simulated.

Comparison with models in other sciences

Complex systems specialist and mathematician David Orrell wrote on this issue in his book Apollo's Arrow and explained that the weather, human health and economics use similar methods of prediction (mathematical models). Their systems—the atmosphere, the human body and the economy—also have similar levels of complexity. He found that forecasts fail because the models suffer from two problems: (i) they cannot capture the full detail of the underlying system, so rely on approximate equations; (ii) they are sensitive to small changes in the exact form of these equations. This is because complex systems like the economy or the climate consist of a delicate balance of opposing forces, so a slight imbalance in their representation has big effects. Thus, predictions of things like economic recessions are still highly inaccurate, despite the use of enormous models running on fast computers.

Effects of deterministic chaos on economic models

Economic and meteorological simulations may share a fundamental limit to their predictive powers: chaos. Although the modern mathematical work on chaotic systems began in the 1970s, the danger of chaos had been identified and defined in Econometrica as early as 1958: "Good theorising consists to a large extent in avoiding assumptions ... [with the property that] a small change in what is posited will seriously affect the conclusions." (William Baumol, Econometrica, 26; see Economics on the Edge of Chaos). It is straightforward to design economic models susceptible to butterfly effects of initial-condition sensitivity.
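That sensitivity is easy to demonstrate. The sketch below iterates a simple nonlinear (logistic-style) map, used here only as a generic stand-in for a nonlinear model rather than any particular published economic specification, and shows two trajectories that start one part in a million apart diverging within a few dozen periods.

```python
# Sketch of initial-condition sensitivity (the "butterfly effect") using a logistic-style map.
# The map is a generic nonlinear toy model, not a specific economic model from the literature.
def trajectory(x0, r=3.9, steps=40):
    """Iterate x_{t+1} = r * x_t * (1 - x_t) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.200000)
b = trajectory(0.200001)   # the same model, with the initial condition perturbed by 1e-6

for t in (0, 10, 20, 30, 40):
    print(t, abs(a[t] - b[t]))   # the tiny initial gap grows by orders of magnitude
```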
However, the econometric research program to identify which variables are chaotic (if any) has largely concluded that aggregate macroeconomic variables probably do not behave chaotically. This would mean that refinements to the models could ultimately produce reliable long-term forecasts. However, the validity of this conclusion has generated two challenges:

In 2004, Philip Mirowski challenged this view and those who hold it, saying that chaos in economics is suffering from a biased "crusade" against it by neo-classical economics in order to preserve their mathematical models. The variables in finance may well be subject to chaos.
Also in 2004, the University of Canterbury study Economics on the Edge of Chaos concluded that after noise was removed from S&P 500 returns, evidence of deterministic chaos was found.

More recently, chaos (or the butterfly effect) has been identified as less significant than previously thought to explain prediction errors. Rather, the predictive power of economics and meteorology would mostly be limited by the models themselves and the nature of their underlying systems (see Comparison with models in other sciences above).

Critique of hubris in planning

A key strand of free market economic thinking is that the market's invisible hand guides an economy to prosperity more efficiently than central planning using an economic model. One reason, emphasized by Friedrich Hayek, is the claim that many of the true forces shaping the economy can never be captured in a single plan. This is an argument that cannot be made through a conventional (mathematical) economic model because it says that there are critical systemic elements that will always be omitted from any top-down analysis of the economy.

Examples of economic models

Cobb–Douglas model of production
Solow–Swan model of economic growth
Lucas islands model of money supply
Heckscher–Ohlin model of international trade
Black–Scholes model of option pricing
AD–AS model – a macroeconomic model of aggregate demand and aggregate supply
IS–LM model – the relationship between interest rates and asset markets
Ramsey–Cass–Koopmans model of economic growth
Gordon–Loeb model for cyber security investments

See also

Economic methodology
Computational economics
Agent-based computational economics
Endogeneity
Financial model

Notes

References

Defines model by analogy with maps, an idea borrowed from Baumol and Blinder. Discusses deduction within models, and logical derivation of one model from another. Chapter 9 compares the neoclassical school and the Austrian School, in particular in relation to falsifiability.
One of the earliest studies on methodology of economics, analysing the postulate of rationality.
A series of essays and papers analysing questions about how (and whether) models and theories in economics are empirically verified and the current status of positivism in economics.
A thorough discussion of many quantitative models used in modern economic theory. Also a careful discussion of aggregation.
This is a classic book carefully discussing comparative statics in microeconomics, though some dynamics is studied as well as some macroeconomic theory. This should not be confused with Samuelson's popular textbook.

External links

R. Frigg and S. Hartmann, Models in Science. Entry in the Stanford Encyclopedia of Philosophy.
H. Varian, How to build a model in your spare time. The author makes several unexpected suggestions: look for a model in the real world, not in journals; look at the literature later, not sooner.
Elmer G. Wiens: Classical & Keynesian AD-AS Model – an on-line, interactive model of the Canadian economy.
IFs Economic Sub-Model: online global model
Proximity principle
Within the realm of social psychology, the proximity principle accounts for the tendency for individuals to form interpersonal relations with those who are close by. Theodore Newcomb first documented this effect through his study of the acquaintance process, which demonstrated how people who interact and live close to each other will be more likely to develop a relationship. Leon Festinger also illustrates the proximity principle and propinquity (the state of being close to someone or something) by studying the network of attraction within a series of residential housing units at the Massachusetts Institute of Technology (MIT). Both of these studies provide evidence to support the fact that people who encounter each other more frequently tend to develop stronger relationships.

There are two main reasons why people form groups with others nearby rather than people further away. First, human beings like things that are familiar to them. Second, the more people come into contact with one another, the more likely the interaction will cultivate a relationship. Also, proximity promotes interaction between individuals and groups, which ends up leading to liking and disliking between the groups or individuals. The aforementioned idea is accurate only insofar as the increased contact does not unveil detestable traits in either person. If detestable traits are unveiled, familiarity will in fact breed contempt. It could be interaction, rather than propinquity, that creates attraction.

Applied science

This concept of proximity is applicable to everyday life, since it has some influence over the people one meets and befriends within one's life, as outlined in the aforementioned studies. The formation of friendships was further studied using the population of 336 adolescents within a small and geographically isolated Swedish town. The researchers concluded that social foci that provide constant and continual interaction among the same participants yield a strong effect on friendship formation. The most notable social foci included attending the same school or living in the same neighborhood, as these are both settings in which one is in close proximity to the same people on numerous occasions. In contrast, attending a different school does not provide the opportunity to meet that school's students, and so one is unlikely to form friendships with them. However, this is mitigated if two people from different schools live in the same neighborhood and therefore still have the opportunity for continuous contact outside of school.

Proximity in the Digital Age

With the increasing use of technology-based communication, it is important to reflect on the impact this may have on the proximity principle. Computer-based communication allows people to interact with others regardless of the constraints of physical distance; however, it is “reported that a majority of social network site postings they sampled occurred between people living in the same state, if not the same city”. Furthermore, it appears that computer-based communication increases the ability for people to communicate, but is often only utilized between those who already know each other through pre-existing circumstances. Although this article is focused on the proximity aspect of the principles of attraction, it is important to note other principles.
These are not in any specific order, but they are important to consider in order to fully understand the principles of attraction. The other principles are the elaboration principle, the similarity principle, the complementarity principle, the reciprocity principle, and the minimax principle.

References

Notes

General references

Bornstein, R. F. (1989). Exposure and affect: Overview and meta-analysis of research, 1968-1987. Psychological Bulletin, 112, 265-289.
Ducheneaut, N., Yee, N., Nickell, E., & Moore, R. (2006). "Alone together?" Exploring the social dynamics of massively multiplayer online games. CHI Proceedings: Games and Performance. Montreal: ACM.
Gieryn, T. F. (2000). A space for place in sociology. Annual Review of Sociology, 26, 463-496.
Moreland, R. L. (1987). The formation of small groups. Review of Personality and Social Psychology, 8, 80-110.
Segal, M. W. (1974). Alphabet and attraction: An unobtrusive measure of the effect of propinquity in a field setting. Journal of Personality and Social Psychology, 30, 654-657.
Positive mental attitude
Positive mental attitude (PMA) is a concept first introduced in 1937 by Napoleon Hill in the book Think and Grow Rich. The book never actually uses the term, but discusses the importance of positive thinking as a contributing factor of success. Hill, who along with W. Clement Stone, founder of Combined Insurance, later wrote Success Through a Positive Mental Attitude, defines positive mental attitude as comprising the 'plus' characteristics represented by such words as faith, integrity, hope, optimism, courage, initiative, generosity, tolerance, tact, kindliness and good common sense.

Positive mental attitude is that philosophy which asserts that having an optimistic disposition in every situation in one's life attracts positive changes and increases achievement. Adherents employ a state of mind that continues to seek, find and execute ways to win, or find a desirable outcome, regardless of the circumstances. This concept is the opposite of negativity, defeatism and hopelessness. Optimism and hope are vital to the development of PMA. PMA has also been described as the philosophy of finding greater joy in small joys and of living without hesitation or holding back one's most cherished and highly esteemed personal virtues and values.

Empirical research suggests that individuals who engage in positive self-talk and maintain a mindful approach to their internal dialogues tend to exhibit greater self-control and resilience, which is crucial for personal and professional growth, highlighting the significance of self-regulation and mindfulness in fostering a positive mental attitude. Furthermore, research on leadership strategies suggests that a positive mental attitude, characterized by a proactive approach to personal and organizational challenges, significantly improves leadership effectiveness and success in leadership roles.

Psychology

PMA is under the umbrella of positive psychology. In positive psychology, high self-efficacy can help in gaining learned optimism, which ultimately leads to PMA. PMA is considered an internal locus of control that influences external factors. Research has shown that through emotional intelligence training and positive psychology therapy, a person's attitudes and perceptions can be modified to improve one's personal and professional life.

Sports

A study of Major League Baseball players indicated that a key component that separates major league players from the minor leagues and all other levels is their ability to develop mental characteristics and mental skills. Among them were mental toughness, confidence, maintaining a positive attitude, dealing with failure, expectations, and positive self-talk.

Health

Well-meaning friends in the US and similar cultures routinely encourage people with a disease to maintain a positive attitude. However, although a positive attitude confers some immediate advantages and is more comfortable for other people, it does not result in a greater chance of cure or longer survival times. A study done with HIV-positive individuals found that a high health self-efficacy, a task-oriented coping style, and a positive mental attitude were strong predictors of a health-promoting lifestyle, which has a significant effect on overall health (coping and surviving).

See also

Creative thinking
Creative Visualization (New Age)
Law of attraction
New Thought
Self-fulfilling prophecy
Self-help
Toxic positivity
Unconditional positive regard

References
Impression management
Impression management is a conscious or subconscious process in which people attempt to influence the perceptions of other people about a person, object or event by regulating and controlling information in social interaction. It was first conceptualized by Erving Goffman in 1959 in The Presentation of Self in Everyday Life, and then was expanded upon in 1967. Impression management behaviors include accounts (providing "explanations for a negative event to escape disapproval"), excuses (denying "responsibility for negative outcomes"), and opinion conformity ("speak(ing) or behav(ing) in ways consistent with the target"), along with many others. By utilizing such behaviors, those who partake in impression management are able to control others' perception of them or events pertaining to them. Impression management is possible in nearly any situation, such as in sports (wearing flashy clothes or trying to impress fans with their skills), or on social media (only sharing positive posts). Impression management can be used with either benevolent or malicious intent. Impression management is usually used synonymously with self-presentation, in which a person tries to influence the perception of their image. The notion of impression management was first applied to face-to-face communication, but then was expanded to apply to computer-mediated communication. The concept of impression management is applicable to academic fields of study such as psychology and sociology as well as practical fields such as corporate communication and media. Background The foundation and the defining principles of impression management were created by Erving Goffman in The Presentation of Self in Everyday Life. Impression management theory states that one tries to alter one's perception according to one's goals. In other words, the theory is about how individuals wish to present themselves, but in a way that satisfies their needs and goals. Goffman "proposed to focus on how people in daily work situations present themselves and, in so doing, what they are doing to others", and he was "particularly interested in how a person guides and controls how others form an impression of them and what a person may or may not do while performing before them". Theory Motives A range of factors that govern impression management can be identified. It can be stated that impression management becomes necessary whenever there exists a kind of social situation, whether real or imaginary. Logically, the awareness of being a potential subject of monitoring is also crucial. Furthermore, the characteristics of a given social situation are important. Specifically, the surrounding cultural norms determine the appropriateness of particular nonverbal behaviours. The actions have to be appropriate to the targets, and within that culture, so that the kind of audience as well as the relation to the audience influences the way impression management is realized. A person's goals are another factor governing the ways and strategies of impression management. This refers to the content of an assertion, which also leads to distinct ways of presentation of aspects of the self. The degree of self-efficacy describes whether a person is convinced that it is possible to convey the intended impression. A new study finds that, all other things being equal, people are more likely to pay attention to faces that have been associated with negative gossip than those with neutral or positive associations. 
The study contributes to a body of work showing that far from being objective, human perceptions are shaped by unconscious brain processes that determine what they "choose" to see or ignore—even before they become aware of it. The findings also add to the idea that the brain evolved to be particularly sensitive to "bad guys" or cheaters—fellow humans who undermine social life by deception, theft or other non-cooperative behavior. There are many methods behind self-presentation, including self disclosure (identifying what makes you "you" to another person), managing appearances (trying to fit in), ingratiation, aligning actions (making one's actions seem appealing or understandable), and alter-casting (imposing identities on other people). Maintaining a version of self-presentation that is generally considered to be attractive can help to increase one's social capital, and this method is commonly implemented by individuals at networking events. These self-presentation methods can also be used on the corporate level as impression management. Self-presentation Self-presentation is conveying information about oneself – or an image of oneself – to others. There are two types and motivations of self-presentation: presentation meant to match one's own self-image, and presentation meant to match audience expectations and preferences. Self-presentation is expressive. Individuals construct an image of themselves to claim personal identity, and present themselves in a manner that is consistent with that image. If they feel like it is restricted, they often exhibit reactance or become defiant – try to assert their freedom against those who would seek to curtail self-presentation expressiveness. An example of this dynamic is someone who grew up with extremely strict or controlling parental figures. The child in this situation may feel that their identity and emotions have been suppressed, which may cause them to behave negatively towards others. Boasting – Millon notes that in self-presentation individuals are challenged to balance boasting against discrediting themselves via excessive self-promotion or being caught and being proven wrong. Individuals often have limited ability to perceive how their efforts impact their acceptance and likeability by others. Flattery – flattery or praise to increase social attractiveness Intimidation – aggressively showing anger to get others to hear and obey one's demands. Self-presentation can be either defensive or assertive strategies (also described as protective versus acquisitive). Whereas defensive strategies include behaviours like avoidance of threatening situations or means of self-handicapping, assertive strategies refer to more active behaviour like the verbal idealisation of the self, the use of status symbols or similar practices. These strategies play important roles in one's maintenance of self-esteem. One's self-esteem is affected by their evaluation of their own performance and their perception of how others react to their performance. As a result, people actively portray impressions that will elicit self-esteem enhancing reactions from others. In 2019, as filtered photos are perceived as deceptive by users, PlentyOfFish along with other dating sites have started to ban filtered images. Social interaction Goffman argued in his 1967 book, Interaction ritual, that people participate in social interactions by performing a "line", or "pattern of verbal and nonverbal acts", which is created and maintained by both the performer and the audience. 
By enacting a line effectively, the person gains positive social value, which is also called "face". The success of a social interaction will depend on whether the performer has the ability to maintain face. As a result, a person is required to display a kind of character by becoming "someone who can be relied upon to maintain himself as an interactant, poised for communication, and to act so that others do not endanger themselves by presenting themselves as interactants to him". Goffman analyses how a human being in "ordinary work situations presents himself and his activity to others, the ways in which he guides and controls the impression they form of him, and the kinds of things he may and may not do while sustaining his performance before them".

When Goffman turned to focus on people physically presented in a social interaction, the "social dimension of impression management certainly extends beyond the specific place and time of engagement in the organization". Impression management is "a social activity that has individual and community implications". We call it "pride" when a person displays a good showing from duty to himself, while we call it "honor" when he "does so because of duty to wider social units, and receives support from these duties in doing so". Another approach to moral standards that Goffman pursues is the notion of "rules of conduct", which "can be partially understood as obligations or moral constraints". These rules may be substantive (involving laws, morality, and ethics) or ceremonial (involving etiquette). Rules of conduct play an important role when a relationship "is asymmetrical and the expectations of one person toward another are hierarchical."

Dramaturgical analogy

Goffman presented impression management dramaturgically, explaining the motivations behind complex human performances within a social setting based on a play metaphor. Goffman's work incorporates aspects of a symbolic interactionist perspective, emphasizing a qualitative analysis of the interactive nature of the communication process. Impression management requires the physical presence of others. Performers who seek certain ends in their interest must "work to adapt their behavior in such a way as to give off the correct impression to a particular audience" and "implicitly ask that the audience take their performance seriously". Goffman proposed that, while among other people, an individual will always strive to control the impression that others form of him or her so as to achieve individual or social goals.

The actor, shaped by the environment and target audience, sees interaction as a performance. The objective of the performance is to provide the audience with an impression consistent with the desired goals of the actor. Thus, impression management is also highly dependent on the situation. In addition to these goals, individuals differ in their responses to the interactive environment: some may be non-responsive to an audience's reactions, while others actively respond to audience reactions in order to elicit positive results. These differences in response towards the environment and target audience are called self-monitoring. Another factor in impression management is self-verification, the act of conforming the audience to the person's self-concept. The audience can be real or imaginary. IM style norms, part of the mental programming received through socialization, are so fundamental that we usually do not notice our expectations of them.
While an actor (speaker) tries to project a desired image, an audience (listener) might attribute a resonant or discordant image. An example is provided by situations in which embarrassment occurs and threatens the image of a participant. Goffman proposes that performers "can use dramaturgical discipline as a defense to ensure that the 'show' goes on without interruption." Goffman contends that dramaturgical discipline includes: coping with dramaturgical contingencies; demonstrating intellectual and emotional involvement; remembering one's part and not committing unmeant gestures or faux pas; not giving away secrets involuntarily; covering up inappropriate behavior on the part of teammates on the spur of the moment; offering plausible reasons or deep apologies for disruptive events; maintaining self-control (for example, speaking briefly and modestly); suppressing emotional responses to private problems; and suppressing spontaneous feelings.

Manipulation and ethics

In business, "managing impressions" normally "involves someone trying to control the image that a significant stakeholder has of them". The ethics of impression management has been hotly debated over whether we should see it as effective self-revelation or as cynical manipulation. Some people insist that impression management can reveal a truer version of the self by adopting the strategy of being transparent. Because transparency "can be provided so easily and because it produces information of value to the audience, it changes the nature of impression management from being cynically manipulative to being a kind of useful adaptation". Virtue signalling is used within groups to criticize their own members for valuing outward appearance over substantive action (having a real or permanent, rather than apparent or temporary, existence).

Psychological manipulation is a type of social influence that aims to change the behavior or perception of others through abusive, deceptive, or underhanded tactics. By advancing the interests of the manipulator, often at another's expense, such methods could be considered exploitative, abusive, devious, and deceptive. The process of manipulation involves bringing an unknowing victim under the domination of the manipulator, often using deception, and using the victim to serve their own purposes. Machiavellianism is a term that some social and personality psychologists use to describe a person's tendency to be unemotional, and therefore able to detach him or herself from conventional morality and hence to deceive and manipulate others. (See also Machiavellianism in the workplace.)

Lying is a destructive force: it allows a manipulator to reshape an environment around narcissistic ends, and another person's mind can be led to accept such deceptive and unethical antics as true. Theories suggest that manipulation can have a large effect on the dynamics of a relationship. Mistrust can color a person's emotions, attitude and character, leading them to behave badly. Relationships built on a positive footing tend to involve greater mutual exchange, whereas relationships marked by poor moral values tend toward detachment and disengagement. Dark personality traits and manipulation go hand in hand, and they stand in the way of a person's attainable goals when that person's perspective is focused solely on self-centeredness.
Such a personality can give rise to a range of erratic behaviors that corrupt the mind, potentially escalating into rage, violent acts and physical harm.

Public relations

Ethics

Professionals serve both the public interest and the private interests of businesses, associations, non-profit organizations, and governments. This dual obligation gave rise to heated debates among scholars of the discipline and practitioners over its fundamental values. This conflict represents the main ethical predicament of public relations.[40] In 2000, the Public Relations Society of America (PRSA) responded to the controversy by acknowledging in its new code of ethics "advocacy" – for the first time – as a core value of the discipline.[40]

The field of public relations is generally highly unregulated, but many professionals voluntarily adhere to the code of conduct of one or more professional bodies to avoid exposure for ethical violations.[41] The Chartered Institute of Public Relations, the Public Relations Society of America, and The Institute of Public Relations are a few organizations that publish an ethical code. Still, Edelman's 2003 semi-annual trust survey found that only 20 percent of survey respondents from the public believed paid communicators within a company were credible.[42]

Individuals in public relations are growing increasingly concerned with their company's marketing practices, questioning whether they agree with the company's social responsibility. They seek more influence over marketing and more of a counseling and policy-making role. On the other hand, individuals in marketing are increasingly interested in incorporating publicity as a tool within the realm of marketing.[43]

According to Scott Cutlip, the social justification for public relations is the right for an organization to have a fair hearing of their point of view in the public forum, but to obtain such a hearing for their ideas requires a skilled advocate.[44]

Marketing and communications strategist Ira Gostin believes there is a code of conduct when conducting business and using public relations. Public relations specialists have the ability to influence society. Fact-checking and presenting accurate information is necessary to maintain credibility with employers and clients.[45]

Public Relations Code of Ethics

The Public Relations Student Society of America has established a set of fundamental guidelines that people within the public relations profession should practice and use in their business environment. These values are:

Advocacy: Serving the public interest by acting as responsible advocates for the clientele. This can occur by displaying the marketplace of ideas, facts and viewpoints to aid informed public debate.
Honesty: Standing by the truth and accuracy of all facts in the case and advancing those statements to the public.
Expertise: Becoming and staying informed of the specialized knowledge needed in the field of public relations, and taking that knowledge and improving the field through development, research and education. Meanwhile, professionals also build their understanding, credibility, and relationships to understand various audiences and industries.
Independence: Providing unbiased work to those that are represented while being accountable for all actions.
Loyalty: Staying devoted to the client while remembering that there is a duty to still serve the public interest.
Fairness: Honorably conducting business with any and all clients, employers, competitors, peers, vendors, media and the general public.
Respecting all opinions and the right of free expression.[46]

International Public Relations Code of Ethics

Other than the ethics put in place in the United States, there are also international ethics intended to ensure proper and legal worldwide communication. Regarding these ethics, there are broad codes used specifically for international forms of public relations, and then there are more specific forms from different countries. For example, some countries have their own associations that create ethics and standards for communication across that country.

The International Association of Business Communicators (founded in 1971),[47] also known as IABC, has its own set of ethics that enforces guidelines to ensure communication internationally is legal, ethical, and in good taste. Some principles that members of the IABC board follow include:

Having proper and legal communication
Being understanding and open to other people's cultures, values, and beliefs
Creating communication that is accurate and trusting, to ensure mutual respect and understanding

The IABC members use the following list of ethics in order to work to improve values of communication throughout the world:[47]

Being credible and honest
Keeping up with information to ensure accuracy of communication
Understanding free speech and respecting this right
Having sensitivity towards other people's thoughts, beliefs, and ways of life
Not taking part in unethical behaviors
Obeying policies and laws
Giving proper credit to resources used for communication
Ensuring private information is protected (not used for personal gain) and, if it is publicized, guaranteeing that proper legal measures are put in place
Publishers of such communications do not accept gifts, benefits, payments, etc., for their work or services
Creating and communicating only results that are attainable and that they can deliver
Being fully truthful to other people, and to themselves

Media, especially news networks, are a major resource in the public relations career. That is why, for a public relations specialist, having accurate information is very important, and crucial to society as a whole.

Spin

Main article: Spin (public relations)

Spin has been interpreted historically to mean overt deceit that is meant to manipulate the public, but since the 1950s has shifted to describing a "polishing of the truth."[48] Today, spin refers to providing a certain interpretation of information meant to sway public opinion.[49] Companies may use spin to create the appearance that the company or other events are going in a slightly different direction than they actually are.[48] Within the field of public relations, spin is seen as a derogatory term, interpreted by professionals as meaning blatant deceit and manipulation.[50][51] Skilled practitioners of spin are sometimes called "spin doctors."

In Stuart Ewen's PR! A Social History of Spin, he argues that public relations can be a real menace to democracy as it renders the public discourse powerless. Corporations are able to hire public relations professionals, transmit their messages through the media channels, and exercise a huge amount of influence upon the individual, who is defenseless against such a powerful force. He claims that public relations is a weapon for capitalist deception and the best way to resist is to become media literate and use critical thinking when interpreting the various mediated messages.[52]

According to Jim Hoggan, "public relations is not by definition 'spin'.
Public relations is the art of building good relationships. You do that most effectively by earning trust and goodwill among those who are important to you and your business... Spin is to public relations what manipulation is to interpersonal communications. It's a diversion whose primary effect is ultimately to undermine the central goal of building trust and nurturing a good relationship."[53] The techniques of spin include selectively presenting facts and quotes that support ideal positions (cherry picking), the so-called "non-denial denial", phrasing that in a way presumes unproven truths, euphemisms for drawing attention away from items considered distasteful, and ambiguity in public statements. Another spin technique involves careful choice of timing in the release of certain news so it can take advantage of prominent events in the news. Negative See also: Negative campaigning Negative public relations, also called dark public relations (DPR), 'black hat PR' and in some earlier writing "Black PR", is a process of destroying the target's reputation and/or corporate identity. The objective in DPR is to discredit someone else, who may pose a threat to the client's business or be a political rival. DPR may rely on IT security, industrial espionage, social engineering and competitive intelligence. Common techniques include using dirty secrets from the target, producing misleading facts to fool a competitor.[54][55][56][57] In politics, a decision to use negative PR is also known as negative campaigning. Application Face-to-face communication Self, social identity and social interaction The social psychologist, Edward E. Jones, brought the study of impression management to the field of psychology during the 1960s and extended it to include people's attempts to control others' impression of their personal characteristics. His work sparked an increased attention towards impression management as a fundamental interpersonal process. The concept of self is important to the theory of impression management as the images people have of themselves shape and are shaped by social interactions. Our self-concept develops from social experience early in life. Schlenker (1980) further suggests that children anticipate the effect that their behaviours will have on others and how others will evaluate them. They control the impressions they might form on others, and in doing so they control the outcomes they obtain from social interactions. Social identity refers to how people are defined and regarded in social interactions. Individuals use impression management strategies to influence the social identity they project to others. The identity that people establish influences their behaviour in front of others, others' treatment of them and the outcomes they receive. Therefore, in their attempts to influence the impressions others form of themselves, a person plays an important role in affecting his social outcomes. Social interaction is the process by which we act and react to those around us. In a nutshell, social interaction includes those acts people perform toward each other and the responses they give in return. The most basic function of self-presentation is to define the nature of a social situation (Goffman, 1959). Most social interactions are very role governed. Each person has a role to play, and the interaction proceeds smoothly when these roles are enacted effectively. 
People also strive to create impressions of themselves in the minds of others in order to gain material and social rewards (or avoid material and social punishments).

Cross-cultural communication

Understanding how one's impression management behavior might be interpreted by others can also serve as the basis for smoother interactions and as a means for solving some of the most insidious communication problems among individuals of different racial/ethnic and gender backgrounds (Sanaria, 2016). "People are sensitive to how they are seen by others and use many forms of impression management to compel others to react to them in the ways they wish" (Giddens, 2005, p. 142). An example of this concept is easily illustrated through cultural differences. Different cultures have diverse thoughts and opinions on what is considered beautiful or attractive. For example, Americans tend to find tan skin attractive, but in Indonesian culture, pale skin is more desirable. It is also argued that women in India use different impression management strategies as compared to women in western cultures (Sanaria, 2016).

Another illustration of how people attempt to control how others perceive them is portrayed through the clothing they wear. A person who is in a leadership position strives to be respected and, in order to control and maintain that impression, dresses accordingly. This illustration can also be adapted for a cultural scenario. The clothing people choose to wear says a great deal about the person and the culture they represent. For example, most Americans are not overly concerned with conservative clothing. Most Americans are content with tee shirts, shorts, and showing skin. The exact opposite is true on the other side of the world. "Indonesians are both modest and conservative in their attire" (Cole, 1997, p. 77). One way people shape their identity is through sharing photos on social media platforms. The ability to modify photos with technologies such as Photoshop helps them achieve their idealized images.

Companies use cross-cultural training (CCT) to facilitate effective cross-cultural interaction. CCT can be defined as any procedure used to increase an individual's ability to cope with and work in a foreign environment. Training employees in culturally consistent and specific impression management (IM) techniques provides the avenue for the employee to consciously switch from an automatic, home-culture IM mode to an IM mode that is culturally appropriate and acceptable. Second, training in IM reduces the uncertainty of interaction with foreign nationals and increases employees' ability to cope by reducing unexpected events.

Team-working in hospital wards

Impression management theory can also be used in health communication. It can be used to explore how professionals 'present' themselves when interacting on hospital wards and also how they employ front stage and backstage settings in their collaborative work. In the hospital wards, Goffman's front stage and backstage performances are divided into 'planned' and 'ad hoc' rather than 'official' and 'unofficial' interactions. Planned front stage is the structured collaborative activities such as ward rounds and care conferences which take place in the presence of patients and/or carers. Ad hoc front stage is the unstructured or unplanned interprofessional interactions that take place in front of patients/carers or directly involve patients/carers.
Planned backstage is the structured multidisciplinary team meeting (MDT) in which professionals gathered in a private area of the ward, in the absence of patients, to discuss management plans for patients under their care. Ad hoc backstage is the use of corridors and other ward spaces for quick conversations between professionals in the absence of patients/carers. Offstage is the social activities between and among professional groups/individuals outside of the hospital context. Results show that interprofessional interactions in this setting are often based less on planned front stage activities than on ad hoc backstage activities. While the former may, at times, help create and maintain an appearance of collaborative interprofessional 'teamwork', conveying a sense of professional togetherness in front of patients and their families, they often serve little functional practice. These findings have implications for designing ways to improve interprofessional practice on acute hospital wards where there is no clearly defined interprofessional team, but rather a loose configuration of professionals working together in a collaborative manner around a particular patient. In such settings, interventions that aim to improve both ad hoc as well as planned forms of communication may be more successful than those intended to only improve planned communication. Computer-mediated communication The hyperpersonal model of computer-mediated communication (CMC) posits that users exploit the technological aspects of CMC in order to enhance the messages they construct to manage impressions and facilitate desired relationships. The most interesting aspect of the advent of CMC is how it reveals basic elements of interpersonal communication, bringing into focus fundamental processes that occur as people meet and develop relationships relying on typed messages as the primary mechanism of expression. "Physical features such as one's appearance and voice provide much of the information on which people base first impressions face-to-face, but such features are often unavailable in CMC. Various perspectives on CMC have suggested that the lack of nonverbal cues diminishes CMC's ability to foster impression formation and management, or argued impressions develop nevertheless, relying on language and content cues. One approach that describes the way that CMC's technical capacities work in concert with users' impression development intentions is the hyperpersonal model of CMC (Walther, 1996). As receivers, CMC users idealize partners based on the circumstances or message elements that suggest minimal similarity or desirability. As senders, CMC users selectively self-present, revealing attitudes and aspects of the self in a controlled and socially desirable fashion. The CMC channel facilitates editing, discretion, and convenience, and the ability to tune out environmental distractions and re-allocate cognitive resources in order to further enhance one's message composition. Finally, CMC may create dynamic feedback loops wherein the exaggerated expectancies are confirmed and reciprocated through mutual interaction via the bias-prone communication processes identified above." According to O'Sullivan's (2000) impression management model of communication channels, individuals will prefer to use mediated channels rather than face-to-face conversation in face-threatening situations. Within his model, this trend is due to the channel features that allow for control over exchanged social information. 
Later work extends O'Sullivan's model by explicating information control as a media affordance, arising from channel features and social skills, that enables an individual to regulate and restrict the flow of social information in an interaction, and presents a scale to measure it. One dimension of the information control scale, expressive information control, positively predicted channel preference for recalled face-threatening situations. This effect remained after controlling for social anxiousness and power relations in relationships. O'Sullivan's model argues that some communication channels may help individuals manage this struggle and therefore be more preferred as those situations arise. It was based on an assumption that channels with features that allow fewer social cues, such as reduced nonverbal information or slower exchange of messages, invariably afford an individual the ability to better manage the flow of complex, ambiguous, or potentially difficult conversations.

Individuals manage what information about them is known, or isn't known, to control others' impressions of them. Anyone who has given the bathroom a quick cleaning when they anticipate the arrival of their mother-in-law (or a date) has managed their impression. For an example from information and communication technology use, inviting someone to view a person's webpage before a face-to-face meeting may predispose them to view the person a certain way when they actually meet.

Corporate brand

The impression management perspective offers potential insight into how corporate stories could build the corporate brand, by influencing the impressions that stakeholders form of the organization. The link between themes and elements of corporate stories and IM strategies/behaviours indicates that these elements will influence audiences' perceptions of the corporate brand.

Corporate storytelling

Corporate storytelling is suggested to help demonstrate the importance of the corporate brand to internal and external stakeholders, and create a position for the company against competitors, as well as help a firm to bond with its employees (Roper and Fill, 2012). The corporate reputation is defined as a stakeholder's perception of the organization (Brown et al., 2006), and Dowling (2006) suggests that if the story causes stakeholders to perceive the organization as more authentic, distinctive, expert, sincere, powerful, and likeable, then it is likely that this will enhance the overall corporate reputation.

Impression management theory is a relevant perspective to explore the use of corporate stories in building the corporate brand. The corporate branding literature notes that interactions with brand communications enable stakeholders to form an impression of the organization (Abratt and Keyn, 2012), and this indicates that IM theory could also therefore bring insight into the use of corporate stories as a form of communication to build the corporate brand. Exploring the IM strategies/behaviors evident in corporate stories can indicate the potential for corporate stories to influence the impressions that audiences form of the corporate brand.

Corporate document

Firms use more subtle forms of influencing outsiders' impressions of firm performance and prospects, namely by manipulating the content and presentation of information in corporate documents with the purpose of "distort[ing] readers' perceptions of corporate achievements" [Godfrey et al., 2003, p. 96].
In the accounting literature this is referred to as impression management. The opportunity for impression management in corporate reports is increasing. Narrative disclosures have become longer and more sophisticated over the last few years. This growing importance of descriptive sections in corporate documents provides firms with the opportunity to overcome information asymmetries by presenting more detailed information and explanation, thereby increasing their decision-usefulness. However, they also offer an opportunity for presenting financial performance and prospects in the best possible light, thus having the opposite effect. In addition to the increased opportunity for opportunistic discretionary disclosure choices, impression management is also facilitated in that corporate narratives are largely unregulated.

Media

The medium of communication influences the actions taken in impression management. Self-efficacy can differ according to whether the attempt to convince somebody is made through face-to-face interaction or by e-mail. Communication via devices like telephone, e-mail or chat is governed by technical restrictions, so that the way people express personal features can be changed; this often shows how far people will go. The affordances of a certain medium also influence the way a user self-presents. Communication via a professional medium such as e-mail would result in professional self-presentation. The individual would use greetings, correct spelling, grammar and capitalization as well as scholastic language. Personal communication mediums such as text-messaging would result in a casual self-presentation where the user shortens words, includes emojis and selfies, and uses less academic language. Another example of impression management theory in play is present in today's world of social media. Users are able to create a profile and share whatever they like with their friends, family, or the world. Users can choose to omit negative life events and highlight positive events if they so please.

Profiles on social networking sites

Social media usage among American adults grew from 5% in 2005 to 69% in 2018. Facebook is the most popular social media platform, followed by Instagram, LinkedIn, and Twitter. Social networking users will employ protective self-presentations for image management. Users will use subtractive and repudiate strategies to maintain a desired image. The subtractive strategy is used to untag an undesirable photo on social networking sites. In addition to un-tagging their name, some users will request the photo to be removed entirely. The repudiate strategy is used when a friend posts an undesirable comment about the user. In response to an undesired post, users may add another wall post as an innocence defense. Michael Stefanone states that "self-esteem maintenance is an important motivation for strategic self-presentation online." Outside evaluations of their physical appearance, competence, and approval from others determine how social media users respond to pictures and wall posts. Unsuccessful self-presentation online can lead to rejection and criticism from social groups. Social media users are motivated to actively participate in SNS from a desire to manage their online image. Online social media presence often varies with respect to users' age, gender, and body weight.
While men and women tend to use social media to comparable degrees, both uses and capabilities vary depending on individual preferences as well as perceptions of power or dominance. In terms of performance, men tend to display characteristics associated with masculinity as well as more commanding language styles. In much the same way, women tend to present feminine self-depictions and engage in more supportive language. With respect to usage across ages, many children develop digital and social media literacy skills around age 7 or 8 and begin to form online social relationships via virtual environments designed for their age group. The years between thirteen and fifteen show high social media usage that gradually becomes more balanced with offline interactions as teens learn to navigate their online and in-person identities, which may often diverge from one another. Social media platforms often provide a great degree of social capital during the college years and later. College students are motivated to use Facebook for impression management, self-expression, entertainment, communication, and relationship maintenance. College students sometimes rely on Facebook to build a favorable online identity, which contributes to greater satisfaction with campus life. In building an online persona, college students sometimes engage in identity manipulation, including altering personality and appearance, to increase their self-esteem and appear more attractive to peers. Since risky behavior is frequently deemed attractive by peers, college students often use their social media profiles to gain approval by highlighting instances of risky behavior, such as alcohol use and unhealthy eating. Users present risky behavior as a sign of achievement, fun, and sociability, participating in a form of impression management aimed at building recognition and acceptance among peers. During middle adulthood, users tend to display greater levels of confidence and mastery in their social media connections, while older adults tend to use social media for educational and supportive purposes. These myriad factors influence how users form and communicate their online personas. TikTok has also influenced how college students and adults create a self-image on a social media platform: they use it to build a personal brand for business and entertainment purposes, which gives them a chance to pursue stardom and build an audience for revenue. Media fatigue is a negative effect of maintaining a social media presence, and social anxiety can stem from the low self-esteem and strain on self-identity that come with performing for a targeted audience in the media limelight. According to Marwick, social profiles create implications such as "context collapse" for presenting oneself to the audience. The concept of context collapse suggests that social technologies make it difficult to vary self-presentation based on environment or audience: "Large sites such as Facebook and Twitter group friends, family members, coworkers, and acquaintances together under the umbrella term 'friends'." In a way, this context collapse is aided by a notion of performativity as characterized by Judith Butler.

Political impression management
Impression management is also influential in the political sphere. "Political impression management" was coined in 1972 by sociologist Peter M.
Hall, who defined the term as the art of making a candidate look electable and capable (Hall, 1972). This is due in part to the importance of appearing "presidential": appearance, image, and narrative are a key part of a campaign, and thus impression management has always been a huge part of winning an election (Katz, 2016). Social media has evolved to be part of the political process, so political impression management is becoming more challenging as the online image of the candidate often now lies in the hands of the voters themselves. The evolution of social media has expanded the ways in which political campaigns target voters and increased how influential impression management is in discussions of political issues and campaigns. Political campaigns continue to use social media to promote themselves and share information about who the candidates are, in order to lead the conversation about their political platform. Research has shown that political campaigns must create clear profiles for each candidate in order to convey the right message to potential voters.

In the workplace
In professional settings, impression management is usually focused primarily on appearing competent, but it also involves constructing and displaying an image of oneself that others find socially desirable and believably authentic. People manage impressions through their choice of dress, dressing either more or less formally, and this shapes the perceptions their coworkers and supervisors form. The process involves give and take: the person managing their impression receives feedback as the people around them interact with the self being presented and respond, favorably or negatively. Research has shown impression management to be impactful in the workplace because the perceptions co-workers form of one another shape their relationships and indirectly influence their ability to function well as teams and achieve goals together. In their research on impression management among leaders, Peck and Hogue define "impression management as conscious or unconscious, authentic or inauthentic, goal-directed behavior individuals engage in to influence the impression others form of them in social interactions." Using those three dimensions, labelled "automatic" vs. "controlled", "authentic" vs. "inauthentic", and "pro-self" vs. "pro-social", Peck and Hogue formed a typology of eight impression management archetypes. They suggest that while no one archetype stands out as the sole correct or ideal way to practice impression management as a leader, types rooted in authenticity and pro-social goals, rather than self-focused goals, create the most positive perceptions among followers. Impression management strategies employed in the workplace can also involve deception, and the ability to recognize deceptive acts affects the supervisor-subordinate relationship as well as coworker relationships. When it comes to workplace behaviors, ingratiation is the major focus of impression management research. Ingratiation behaviors are those that employees engage in to elicit a favorable impression from a supervisor. These behaviors can have a negative or positive impact on coworkers and supervisors, and this impact depends on how the ingratiation is perceived by the target and by those who observe it. The perception that follows an ingratiating act depends on whether the target attributes the behavior to the authentic self of the person performing the act or to impression management strategies.
Once the target is aware that the ingratiation results from impression management strategies, the target will perceive ethical concerns regarding the performance. However, if the target attributes the ingratiation performance to the actor's authentic self, the target will perceive the behavior as positive and have no ethical concerns. Workplace leaders who are publicly visible, such as CEOs, also perform impression management with regard to stakeholders outside their organizations. In a study comparing online profiles of North American and European CEOs, research showed that while education was referenced similarly in both groups, profiles of European CEOs tended to be more professionally focused, while North American CEO profiles often referenced the CEO's public life outside business dealings, including social and political stances and involvement. Employees also engage in impression management behaviors to conceal or reveal personal stigmas. How these individuals approach the disclosure of their stigma(s) affects coworkers' perceptions of the individual, as well as the individual's perception of themselves, and thus affects likeability amongst coworkers and supervisors. On a smaller scale, many individuals choose to engage in professional impression management beyond the sphere of their own workplace. This may take place through informal networking (either face-to-face or using computer-mediated communication) or through channels built to connect professionals, such as professional associations or job-related social media sites like LinkedIn.

Implications
Impression management can distort the results of empirical research that relies on interviews and surveys, a phenomenon commonly referred to as "social desirability bias". Impression management theory nevertheless constitutes a field of research in its own right. When it comes to practical questions concerning public relations and the way organizations should handle their public image, the assumptions provided by impression management theory can also provide a framework. An examination of the impression management strategies employed by individuals facing criminal trials, where the outcomes could range from a death sentence to life in prison to acquittal, has been reported in the forensic literature. The Perri and Lichtenwald article examined female psychopathic killers, who as a group were highly motivated to manage the impression that attorneys, judges, mental health professionals, and ultimately a jury had of them and of the murders they committed. It provides legal case illustrations of the murderers combining and/or switching from one impression management strategy, such as ingratiation or supplication, to another as they worked towards their goal of diminishing or eliminating any accountability for the murders they committed. Since the 1990s, researchers in the area of sport and exercise psychology have studied self-presentation. Concern about how one is perceived has been found to be relevant to the study of athletic performance; for example, anxiety may be produced when an athlete is in the presence of spectators. Self-presentational concerns have also been found to be relevant to exercise; for example, such concerns may elicit motivation to exercise. More recent research investigating the effects of impression management on social behaviour showed that social behaviours (e.g. eating) can serve to convey a desired impression to others and enhance one's self-image.
Research on eating has shown that people tend to eat less when they believe that they are being observed by others.

See also
Calculating Visions: Kennedy, Johnson, and Civil Rights (book)
Character mask
Dignity
Dramaturgy (sociology)
First impression (psychology)
Ingratiation
Instagram's impact on people
Online identity management
On the Internet, nobody knows you're a dog
Personal branding
Register (sociolinguistics)
Reputation capital
Reputation management
Self-monitoring theory
Self-verification theory
Signalling (economics)
Spin (public relations)
Superficial charm
Stigma management

References
Barnhart, Adam (1994), Erving Goffman: The Presentation of Self in Everyday Life.
Goffman, Erving (2006), Wir alle spielen Theater: Die Selbstdarstellung im Alltag, Piper, Munich.
Dillard, Courtney, et al. (2000), "Impression Management and the Use of Procedures at the Ritz-Carlton: Moral Standards and Dramaturgical Discipline", Communication Studies, 51.
Döring, Nicola (1999), Sozialpsychologie des Internet: Die Bedeutung des Internet für Kommunikationsprozesse, Identitäten, soziale Beziehungen und Gruppen, Hogrefe, Göttingen.
Felson, Richard B. (1984), "An Interactionist Approach to Aggression", in Tedeschi, James T. (ed.), Impression Management Theory and Social Psychological Research, Academic Press, New York.
Sanaria, A. D. (2016), "A Conceptual Framework for Understanding the Impression Management Strategies Used by Women in Indian Organizations", South Asian Journal of Human Resources Management, 3(1), 25–39. https://doi.org/10.1177/2322093716631118
Hall, Peter (1972), "A Symbolic Interactionist Analysis of Politics", Sociological Inquiry, 42(3–4), 35–75.
Hass, Glen R. (1981), "Presentational Strategies and the Social Expression of Attitudes: Impression Management within Limits", in Tedeschi, James T. (ed.), Impression Management Theory and Social Psychological Research, Academic Press, New York.
Humphreys, A. (2016), Social Media: Enduring Principles, Oxford University Press, Oxford.
Katz, Nathan (2016), "Impression Management, Super PACs and the 2012 Republican Primary", Symbolic Interaction, 39(2), 175–95.
Tedeschi, James T.; Riess, Marc (1984), "Identities, the Phenomenal Self, and Laboratory Research", in Tedeschi, James T. (ed.), Impression Management Theory and Social Psychological Research, Academic Press, New York.
Smith, Greg (2006), Erving Goffman, Routledge, New York.
Rui, J.; Stefanone, M. A. (2013), "Strategic Management of Other-Provided Information Online: Personality and Network Variables", Proceedings of the 46th Hawaii International Conference on System Sciences (HICSS).
Business acumen
Business acumen, also known as business savviness, business sense and business understanding, is a combination of knowledge, skills, and experience that enables individuals to understand business situations, make sound decisions, and drive successful outcomes for an organization. It is also defined as "keenness and quickness in understanding and dealing with a business situation (risks and opportunities) in a manner that is likely to lead to a good outcome". It involves having a "big picture" view of the business, financial literacy, strategic thinking, problem-solving, and effective communication. The UK government considers business acumen to be a skill required by civil service staff with responsibilities in a contract management role. Additionally, business acumen is viewed as having emerged as a vehicle for improving financial performance and leadership development. Consequently, several types of strategies have developed around improving business acumen.

Characteristics

Executive level thinking
In his 2012 book Seeing the Big Picture: Business Acumen to Build Your Credibility, Career, and Company, Kevin R. Cope states that an individual who possesses business acumen views the business with an "executive mentality": they comprehend how the moving parts of a company work together to ensure success, and how financial metrics like profit margin, cash flow, and stock price reflect how well each of those moving parts is doing its job. Cope proposes that an individual who has the following five abilities could be described as having a strong sense of business acumen:
Seeing the "big picture" of the organization—how the key drivers of the business relate to each other, work together to produce profitable growth, and relate to the job
Understanding important company communications and data, including financial statements
Using knowledge to make good decisions
Understanding how actions and decisions impact key company measures and leadership objectives
Communicating ideas effectively to other employees, managers, executives, and the public

Distinguishing traits
Raymond R. Reilly of the Ross School of Business at the University of Michigan and Gregory P. Reilly of the University of Connecticut document traits that individuals with business acumen possess:
An acute perception of the dimensions of business issues
The ability to make sense out of complexity and an uncertain future
Cognizance of the implications of a choice for all the affected parties
Decisiveness
Flexibility for further change if warranted in the future
Thus, developing stronger business acumen means more thoughtful analysis, clearer logic underlying business decisions, closer attention to key dimensions of implementation and operation, and more disciplined performance management. The ability to manage complexity also figures in the UK government's description of a business acumen attribute.

Financial literacy
Financial literacy is a comprehensive understanding of the drivers of growth, profitability, and cash flow; an organization's financial statements; key performance measures; and the implications of decisions on value creation. Financial literacy is necessary, but not sufficient, to establish business acumen. According to the Perth Leadership Institute, "Business acumen is based primarily on behavioral and experiential issues, not on formal learning or education like financial literacy. [...] Financial literacy is almost never the need for senior managers and high potentials. [...]
The real need for these managers is to understand how their actions and their behavior impact their financial decision-making [and] financial outcomes at the unit and the corporate level."

Business management and leadership
Bob Selden observed a complementary relationship between business acumen and leadership. According to Selden, this relationship reflects the importance of nurturing both the development of strategic skills and that of good leadership and management skills in order for business leaders to achieve effectiveness. According to a study titled Business acumen: a critical concern of modern leadership development, global trends are accelerating the move away from traditional leadership development approaches, which are said to rely on personality and competency assessments as the scientific core of their method and are failing. The study's intended goal is reportedly to demonstrate the importance of business acumen in leadership-development approaches. Business acumen, according to this study, is projected to have an increasing impact on leadership development and HR agendas. Research into this relationship resulted in the creation of the Perth Leadership Outcome Model, which links financial outcomes to individual leadership traits. In a study that interviewed 55 global business leaders, business acumen was cited as the most critical competency area for global leaders. In their 2011 book The Leadership Pipeline, Ram Charan, Stephen Drotter, and James Noel study the process and criteria for selecting a group manager, and suggest that the process and criteria are similar for selecting a CEO. According to them, an obvious criterion for selecting a leader is well-developed business acumen. An organization full of individuals with high business acumen can expect to see leaders with a heightened perspective that translates into an ability to inspire and excite the organization to achieve its potential.

Development
Programs designed to improve an individual's or group's business acumen have supported the recognition of the concept as a significant topic in the corporate world. Executive Development Associates' 2009/2010 survey of Chief Learning Officers, Senior Vice Presidents of Human Resources, and Heads of Executive and Leadership Development listed business acumen as the second most significant trend in executive development. A 2011 report explores the impact of business acumen training on an organization in terms of intangibles and more tangible expressions of value. The findings support the notion that business acumen is a learned skill, developed on the job by learning the required skills from knowledge mentors while working in different employment positions. They also suggest that the learning process ranges widely, from structured internal company training programs to an individual's self-chosen moves from one position to another. Together, these reports and surveys indicate that business acumen is a learned skill of increasing importance within the corporate world. There are different types of business acumen development programs available:

Business simulations
A business simulation is another corporate development tool used to increase business acumen. Several companies offer business simulations as a way to educate mid-level managers and non-financial leaders within their organization on cash flow and financial decision-making processes. Their forms can vary from computer simulations to boardgame-style simulations.
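As a rough illustration of the kind of exercise such simulations pose, the sketch below traces how pricing, volume, and cost assumptions flow through to operating margin and cash position over several quarters. The figures, names, and the simplified cash rule are purely illustrative assumptions, not drawn from any particular training product:

def simulate_quarter(cash, units_sold, price, unit_cost, fixed_costs):
    # Compute one quarter's results from simple unit economics.
    revenue = units_sold * price
    gross_profit = revenue - units_sold * unit_cost
    operating_profit = gross_profit - fixed_costs
    cash += operating_profit  # illustrative: ignores financing, taxes, and timing effects
    margin = operating_profit / revenue if revenue else 0.0
    return cash, margin

cash = 100_000.0
for quarter, units in enumerate([800, 950, 1100, 1000], start=1):
    cash, margin = simulate_quarter(cash, units, price=120.0,
                                    unit_cost=70.0, fixed_costs=35_000.0)
    print(f"Q{quarter}: cash = {cash:,.0f}, operating margin = {margin:.1%}")

Even a toy loop like this makes visible the connection Cope emphasizes: a decision about price or volume shows up immediately in margin and cash flow, which is the link such simulations are designed to teach.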
Psychological assessments
The advent of personal assessments for business acumen is based on the emerging theories of behavioral finance and attempts to correlate innate personality traits with positive financial outcomes. This method approaches business acumen not as based entirely on either knowledge or experience, but on the combination of these and other factors which comprise an individual's financial personality or "signature".

External links
https://perthleadership.org/en/
Participatory economics
Participatory economics, often abbreviated Parecon, is an economic system based on participatory decision making as the primary economic mechanism for allocation in society. In the system, the say in decision-making is proportional to the impact on a person or group of people. Participatory economics is a form of a socialist decentralized planned economy involving the collective ownership of the means of production. It is a proposed alternative to contemporary capitalism and centralized planning. This economic model is primarily associated with political theorist Michael Albert and economist Robin Hahnel, who describes participatory economics as an anarchist economic vision. The underlying values that parecon seeks to implement are: equity, solidarity, diversity, workers' self-management, efficiency (defined as accomplishing goals without wasting valued assets), and sustainability. The institutions of parecon include workers' and consumers' councils utilising self-managerial methods for decision-making, balanced job complexes, remuneration based on individual effort, and wide decentralized planning. In parecon, self-management constitutes a replacement for the mainstream conception of economic freedom, which Albert and Hahnel argue by its very vagueness has allowed it to be abused by capitalist ideologues. Albert and Hahnel claim that participatory economics has been practiced to varying degrees during the Russian Revolution of 1917, Spanish Revolution of 1936, and occasionally in South America. Work and distribution Balanced job complexes A balanced job complex is a collection of tasks within a given workplace that is balanced for its equity and empowerment implications against all other job complexes in that workplace. Compensation for effort and sacrifice (principle for distribution) Albert and Hahnel argue that it is inequitable and ineffective to compensate people on the basis of luck (e.g. skills or talents that owe to their birth or heredity), or by virtue of workers' productivity (as measured by the value of the goods they produce). Therefore, the primary principle of participatory economics is to reward workers for their effort and sacrifice. Additionally, participatory economics would provide exemptions from the compensation for effort principle. The starting point for the income of all workers in a participatory economy is an equal share of the social product. From this point, incomes for personal expenditures and consumption rights for public goods can be expected to diverge by small degrees, reflecting the choices that individuals make in between work and leisure time, and the level of danger and difficulty of a job as judged by their immediate workplace peers. Allocation of resource Albert and Hahnel argue that decentralized planning can achieve Pareto optimum, and does so under less restrictive assumptions than free market models (see: the first fundamental theorem of welfare economics). Their model incorporates both public goods and externalities, whereas markets do not achieve Pareto optimality when including these conditions. Facilitation boards In a proposed participatory economy, key information relevant to converging on an economic plan would be made available by Iteration Facilitation Boards (IFBs), which, based on proposals from worker/consumer councils and economic data, present indicative prices and economic projections at each round of the planning process. The IFB has no decision-making authority. 
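The iterative procedure these boards facilitate can be pictured, very roughly, as a loop in which councils submit proposals at the current indicative prices and prices are then nudged in proportion to excess demand. The sketch below is only an illustration under assumed data structures and an invented adjustment rule; it is not Albert and Hahnel's formal procedure:

def run_planning_rounds(prices, get_proposals, step=0.05, tolerance=0.02, max_rounds=50):
    """Adjust indicative prices until council proposals are roughly feasible.

    prices: dict mapping each good to its current indicative price.
    get_proposals: function that, given prices, returns (demand, supply) dicts
    representing the revised proposals of consumer and worker councils.
    """
    for round_number in range(1, max_rounds + 1):
        demand, supply = get_proposals(prices)      # councils respond to current prices
        excess = {g: demand[g] - supply[g] for g in prices}
        if all(abs(excess[g]) <= tolerance * max(supply[g], 1) for g in prices):
            return prices, round_number             # proposals are roughly feasible
        for g in prices:
            # Raise prices of over-demanded goods, lower prices of over-supplied ones.
            prices[g] *= 1 + step * excess[g] / max(supply[g], 1)
    return prices, max_rounds

In an actual participatory plan the proposal step would itself be a deliberative process within councils rather than a function call; the point of the sketch is only the feedback loop between proposals and indicative prices.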
In theory, the IFB's activity can consist mainly of computers performing the (agreed upon) algorithms for adjusting prices and forecasts, with little human involvement. Motivations (opposition to central planning and capitalism) Robin Hahnel has argued that "participatory planning is not central planning", stating "The procedures are completely different and the incentives are completely different. And one of the important ways in which it is different from central planning is that it is incentive compatible, that is, actors have an incentive to report truthfully rather than an incentive to misrepresent their capabilities or preferences." Unlike historical examples of central planning, the parecon proposal advocates the use and adjustment of price information reflecting marginal social opportunity costs and benefits as integral elements of the planning process. Hahnel has argued emphatically against Milton Friedman's a priori tendency to deny the possibility of alternatives: Friedman assumes away the best solution for coordinating economic activities. He simply asserts "there are only two ways of coordinating the economic activities of millions—central direction involving the use of coercion—and voluntary cooperation, the technique of the marketplace." [...] a participatory economy can permit all to partake in economic decision making in proportion to the degree they are affected by outcomes. Since a participatory system uses a system of participatory planning instead of markets to coordinate economic activities, Friedman would have us believe that participatory planning must fall into the category of "central direction involving the use of coercion." Albert and Hahnel have voiced detailed critiques of centrally-planned economies in theory and practice, but are also highly-critical of capitalism. Hahnel claims "the truth is capitalism aggravates prejudice, is the most inequitable economy ever devised, is grossly inefficient—even if highly energetic—and is incompatible with both economic and political democracy. In the present era of free-market triumphalism it is useful to organize a sober evaluation of capitalism responding to Friedman's claims one by one." Critique of markets Mainstream economists largely acknowledge the problem of externalities but believe they can be addressed either through Coasian bargaining or the use of Pigovian taxes—corrective taxes on goods that produce negative externalities. While Hahnel (and Albert) favour the use of Pigovian taxes as solutions to environmental problems within market economies (over alternatives such as the issuance of marketable permits), he is critical about the regressive incidence of such taxes. Firms in a market economy will seek to shift the costs of taxation onto their consumers. While this might be considered a positive development in terms of incentives—since it penalizes consumers for "dirty" consumption—it fails to achieve the polluter pays principle and would instead aggravate "economic injustice." Hahnel, therefore, recommends that pollution taxes be linked to cuts in regressive taxes such as social security taxes. Hahnel is also critical of the mainstream assumption that externalities are anomalous and, on the whole, insignificant to market efficiency; he asserts instead that externalities are prevalent—the rule rather than the exception—and substantial. 
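For reference, the textbook Pigovian condition that this critique engages can be stated compactly. This is the standard welfare-economics formulation rather than anything specific to Hahnel or the parecon literature, with MED denoting marginal external damage, PMC private marginal cost, and SMC social marginal cost at the efficient output q*:

\[
  t^{*} = \mathrm{MED}(q^{*}), \qquad \mathrm{PMC}(q^{*}) + t^{*} = \mathrm{SMC}(q^{*})
\]

Hahnel's objection, developed below, is that markets give no reliable way to estimate the damage term in the first place, so the tax cannot in practice be set at this level.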
Ultimately, Hahnel argues that Pigovian taxes, along with the associated corrective measures advanced by market economists, fall far short of adequately or fairly addressing externalities. He argues such methods are incapable of attaining accurate assessments of social costs:

Markets corrected by pollution taxes only lead to the efficient amount of pollution and satisfy the polluter pays principle if the taxes are set equal to the magnitude of the damage victims suffer. But because markets are not incentive compatible for polluters and pollution victims, markets provide no reliable way to estimate the magnitudes of efficient taxes for pollutants. Ambiguity over who has the property right, polluters or pollution victims, free rider problems among multiple victims, and the transaction costs of forming and maintaining an effective coalition of pollution victims, each of whom is affected to a small but unequal degree, all combine to render market systems incapable of eliciting accurate information from pollution victims about the damages they suffer, or acting upon that information even if it were known.

Class and hierarchy
Although parecon falls within the left-wing political tradition, it is designed to avoid the creation of powerful intellectual elites or the rule of a bureaucracy, which is perceived as the major problem of the economies of the communist states of the 20th century. In their book Looking Forward, Albert and Hahnel termed this situation 'coordinatorism'. Parecon advocates recognize that monopolization of empowering labor, in addition to private ownership, can be a source of class division. Thus, a three-class view of the economy (capitalists, coordinators, and workers) is stressed, in contrast to the traditional two-class view of Marxism. The coordinator class, emphasized in parecon, refers to those who have a monopoly on empowering skills and knowledge, and corresponds to the doctors, lawyers, managers, engineers, and other professionals in present economies. Parecon advocates argue that, historically, Marxism ignored the ability of coordinators to become a new ruling class in a post-capitalist society.

Innovation
Hahnel has also written a detailed discussion of parecon's desirability compared to capitalism with respect to incentives to innovate. In capitalism, patent laws, intellectual property rights, and barriers to market entry are institutional features that reward individual innovators while limiting the use of new technologies. Hahnel notes that, in contrast, "in a participatory economy all innovations will immediately be made available to all enterprises, so there will never be any loss of static efficiency."

Criticism
The market socialist David Schweickart suggests participatory economics would be undesirable even if it were possible:

It is a system obsessed with comparison (Is your job complex more empowering than mine?), with monitoring (You are not working at average intensity, mate—get with the program), with the details of consumption (How many rolls of toilet paper will I need next year? Why are some of my neighbors still using the kind not made of recycled paper?)

Other criticisms raised by Schweickart include: Difficulty with creating balanced job complexes and ensuring they do not suffer from inefficiency. A system based on peer evaluation may not work, as workers could slack off and there would be little incentive for colleagues to damage their relationships by giving them bad reviews.
Alternatively it may cause workers to become suspicious of one another, undermining solidarity. A compensation system based on effort would be difficult to measure and would need to be based on an average rating system of effort. Parecon's compensation system would be overly egalitarian and likely cause resentment among workers who work harder while also discouraging them from putting in extra effort since they will gain no greater compensation. Parecon would likely produce an onerous and tiresome requirement to list off all things people want produced, which would likely suffer from uncertainty given people do not always know what they desire, as well as issues with how much information they should be required to supply and complexities with the negotiations required between worker and consumer councils. Theodore Burczak argues that it would be difficult for others to measure sacrifice in another's labor, which is largely unobservable. Planning Participatory economics would create a large amount of administrative work for individual workers, who would have to plan their consumption in advance, and a new bureaucratic class. Proponents of parecon argue that capitalist economies are hardly free of bureaucracy or meetings, and a parecon would eliminate banks, advertising, stock market, tax returns and long-term financial planning. Albert and Hahnel claim that it is probable that a similar number of workers will be involved in a parecon bureaucracy as in a capitalist bureaucracy, with much of the voting achieved by computer rather than meeting, and those who are not interested in the collective consumption proposals not required to attend. Critics suggest that proposals require consideration of an unfeasibly large set of policy choices, and that lessons from planned societies show that peoples' daily needs cannot be established well in advance simply by asking people what they want. Albert and Hahnel note that markets themselves hardly adjust prices instantaneously, and suggest that in a participatory economy facilitation boards could modify prices on a regular basis. According to Hahnel these act according to democratically decided guidelines, can be composed of members from other regions and are impossible to bribe due to parecon's non-transferable currency. However, Takis Fotopoulos argues that "no kind of economic organisation based on planning alone, however democratic and decentralized it is, can secure real self-management and freedom of choice." See also Anarchist economics Anarcho-syndicalism Collaborative e-democracy Co-operative Economic democracy Inclusive Democracy Libertarian municipalism Participatory politics Social justice References Further reading A Quiet Revolution In Welfare Economics, Albert and Hahnel, Princeton University Press, 1990. Looking Forward: Participatory Economics for the Twenty First Century, Albert and Hahnel, South End Press, 1991. The Political Economy of Participatory Economics, Albert and Hahnel, Princeton University Press, 1991. Moving Forward: Program for a Participatory Economy, Albert, AK Press, 1997. Parecon: Life After Capitalism, Albert, Verso Books, 2003. Economic Justice And Democracy: From Competition To Cooperation, Hahnel, Routledge, 2005. Realizing Hope: Life Beyond Capitalism, Albert, Zed Press, 2006. Real Utopia: Participatory Society for the 21st Century, Chris Spannos (Ed.), AK Press, 2008. Takis Fotopoulos (2003), Inclusive Democracy and Participatory Economics, Democracy & Nature, Volume 9, Issue 3 November 2003, pp. 
401–25 – a comparison with Inclusive Democracy.
Rameez Rahman, Michel Meulpolder, David Hales, Johan Pouwelse, Henk Sips (2009), "Revisiting Social Welfare in P2P", Delft University of Technology Report – applying Participatory Economics principles to the analysis of peer-to-peer computing systems.
Transactionalism
Transactionalism is a pragmatic philosophical approach to questions such as: what is the nature of reality; how we know and are known; and how we motivate, maintain, and satisfy our goals for health, money, career, relationships, and a multitude of conditions of life through mutually cooperative social exchange and ecologies. It involves the study and accurate thinking required to plan and utilize one's limited resources in the fundamental mechanics of social exchange or trans-action. To transact is learning to beat the odds or mitigate the common pitfalls involved with living a good and comfortable life by always factoring in the surrounding circumstances of people, places, things and the thinking behind any exchange from work to play. In our complex, ever-changing society with its indifferent marketplace, we cannot thrive without requesting or inviting the help of others and offering help to those around us. To co-create a healthy exchange of value for all involved, we must understand and apply the fundamental mechanics of transaction. [This is not to be confused with the favor or advantage of quid pro quo.] Without cooperative exchange, we resist transacting to survive the unavoidable biological, societal, and environmental threats that can prevent us from comfort and ease in any of the multiple conditions of life we labor to maintain (cf. Hannah Arendt's philosophy of labor, work, and action). In this philosophy, human interactions are best understood as a set of simple to complex transactions. A transaction is a reciprocal and co-constitutive cycle of moves (what to do) and phases (or implemented tactics) aimed at satisfying (or at learning to become fit) in the multiple and interlocking conditions of life including health, work, money, knowledge, education, career, ethics, and more. If we work ourselves to death or ignore accurate thinking about our relationships, without help those conditions of life will eventually threaten our health, career, and money, for example. We must transact to maintain multiple and unavoidable conditions of our lives. A transactionalist approach demands an "un-fractured observation" of life as an organism that is influenced by and is influencing its environment or ecology. By considering the self as an organism inseparable from its environment, hyphenated as "organism-environment," we begin to recognize that any outcome is "determined by prior causes and articulated ends" not merely the intention or the end goal of an individual. This philosophical approach has correlation to Hannah Arendt's notion of human being as "political animal" ("Zoon Politikon") that should attend to the "labor, work, and action" beyond merely articulating an aspiration or a goal. It is critical that an organism-environment keep in mind how "consequences and outcomes" determine the satisfaction of any human endeavor. We must take into account that we, as a human being in transaction, are embedded in and constituted by not only our intentions, but simultaneously by the specific circumstances of our biology, our narratives in exchange, and the social situation that includes tangible resources like tools and settings, intangible resources like time and meaning, and the human resources of other people and their personalities and roles within a transaction or social exchange. 
Beyond our conscious awareness, three aspects of experience — the observer, the process of observing, and the thing observed in a situation— are all "affected by whatever merits or defects [the organism or environment] may prove to have when it is judged". A transactionalist holds that all human acts, including learning, are best understood as "entities" within a larger, often under-examined, transactional whole. The transactional whole is shaped by our health as an organism as well as the health of others (e.g., our biology as a living organisms), for example. Transactional competence is shaped by language and communication with others (e.g.,linguistic narratives). It is shaped and affected by one's fitness in satisfying an ethical exchange of business or education in certain conditions of life, such as reputation, politics (small and large), and ethics—how we treat one another or regulate our behavior and feelings. Human satisfaction is shaped first and foremost by our body's state of wellness or disease, which is inescapably linked to the ecology, shared and/or invented norms and values, and the fitness of our ability to understand the mechanics of trans-acting. We must make real the conditions and accept the consequences of what it takes to live a satisfying life in an ever-changing body and world. Transactionalism functions as a means of "controlled inquiry" into the complex nature and interactions of daily life. Overview In their 1949 book Knowing and the Known, transactionalists John Dewey and Arthur Bentley explained that they were "willing under hypothesis to treat all [human] behavings, including [their] most advanced knowings, as activities not of [them]self alone, nor even as primarily [theirs], but as processes of the full situation of organism-environment." John Dewey used the term "trans-action" to "describe the process of knowing as something that involves the full situation of organism-environment, not a mere inter-action between two independent entities, e.g., the observer and the object observed." A "trans-action" (or simply a "transaction") rests upon the recognition that subject (the observer) and object (the observed) are inseparable; "Instead, observer and observed are held in close organization. Nor is there any radical separation between that which is named and the naming." A knower (as "subject") and what they know (as "object" that may be human, tangible, or intangible) are inseparable and must be understood as inseparable to live a truly satisfying life. Dewey and Bentley distinguished the "trans-actional" point of view (as opposed to a "self-actional" or "inter-actional" one) in their preface: The transactional is in fact that point of view which systematically proceeds upon the ground that knowing is co-operative and as such is integral with communication. By its own processes it is allied with the postulational. It demands that statements be made as descriptions of events in terms of durations in time and areas in space. It excludes assertions of fixity and attempts to impose them. It installs openness and flexibility in the very process of knowing. It treats knowledge as itself inquiry—as a goal within inquiry, not as a terminus outside or beyond inquiry.The metaphysics and epistemology of living a satisfactory life begins with the hypothesis that man is an "organism-environment" solving problems in and, through a necessary exchange with others. 
Therefore, attention must always be paid to organizing acts as aspects or entities within a reciprocal, co-constitutive, and ethical exchange, whether it be in co-operative buying and selling; teaching and learning; marital trans-actions; or in any social situation where human beings engage one another. Definition Stemming from the Latin transigere ("to drive through", "to accomplish"), the root word "transaction" is not restricted to (or to be collapsed with) the economic sense of buying and selling or merely associated with a financial transaction. A much larger field of exchange is employed and summoned up here; such as, "any sort of social interaction, such as verbal communication, eye contact, or touch. A 'stroke' [of one's hand] is an act of recognition of a transaction" as described in psychological transactional analysis It not only examines exchanges, or "transactions," between borrower and lender, but encompasses any transaction involving people and objects including "borrowing-lending, buying-selling, writing-reading, parent-child, and husband-wife [or partners in a civil or marital union]." A transaction, then is "a creative act, engaged in by one who, by virtue of [their] participation in the act – of which [they are] always an aspect, never an entity – together with the other participants, be they human or otherwise environmental, becomes in the process modified" by and through exchange with others. Background Main contributors While John Dewey is viewed by many transactionalists as its principal architect, social anthropologist Fredrik Barth was among the first to articulate the concept as it is understood in contemporary study. Political scientists Karl W. Deutsch and Ben Rosamond have also written on the subject. In 1949, Dewey and Bentley offered that their sophisticated pragmatic approach starts from the perception of "man" as an organism that is always transacting within its environment; that it is sensible to think of our selves as an organism-environment seeking to fulfill multiple necessary conditions of life "together-at-once". It is a philosophy purposefully designed to correct the "fragmentation of experience" found in the segmented approaches of Subjectivism, Constructivism, Objectivism (Ayn Rand), and Skepticism.[1] Each of these approaches are aspects of problem-solving used by the transactionalist to examine the invention, construction of a narrative presentation, the objective work or activity that must happen, and the deconstruction of a transaction to fully observe and assess the consequences and outcomes of any transaction—from simply to complex—in the process of living a good and satisfying life. Dewey asserted that human life is not actually organized into separate entities, as if the mind (its sense of emotion, feeling, invention, imagination, or judgment) and the world outside it (natural and manufactured goods, social roles and institutions including the family, government, or media) are irreconcilable, leading to the question "How does the mind know the world?" Transactionalist analysis is a core paradigm advanced by social psychologist Eric Berne in his book Games People Play, in which an analyst seeks to understand an individual as "embedded and integrated" in an ever-evolving world of situations, actors, and exchange. 
The situational orientation of transactionalist problem-solving has been applied to a vast array of academic and professional discourses, including educational philosophy in the humanities; social psychology, political science, and political anthropology in the social sciences; occupational science in the health sciences; cognitive science, zoology, and quantum mechanics in the natural sciences; as well as the development of a transactional competence in leadership-as-practice in business management.

Historical antecedents
Galileo refused to seek the causes of the behavior of physical phenomena in the phenomena alone and sought the causes in the conditions under which the phenomena occur. The evolution of philosophy from Aristotelian thought to Galilean thinking shifts the focus from behavior to the context of the behavior in problem-solving. The writing of John Dewey and Arthur Bentley in Knowing and the Known offers a dense primer on transactionalism, but its historical antecedents date back to Polybius and Galileo. Trevor J. Phillips (1927–2016), American professor emeritus in educational foundations and inquiry at Bowling Green State University from 1963 to 1996, wrote a comprehensive thesis documenting the historical, philosophical, psychological, and educational development of transactionalism in his 1966 dissertation "Transactionalism: An Historical and Interpretive Study", published in 2013 by the business education company Influence Ecology. Phillips traced transactionalism's philosophical roots to Greek thinkers such as Polybius and Plato, as well as to the 17th-century polymath Galileo, considered the architect of the scientific revolution, and René Descartes, considered the architect of modern Western philosophy. Galileo's contributions to the scientific revolution rested on a transactionalist understanding from which he argued that Aristotelian physics was in error, as he wrote in Dialogue Concerning the Two Chief World Systems (1632): "[I]f it is denied that circular motion is peculiar to celestial bodies, and affirmed to belong to all naturally movable bodies, then one must choose one of two necessary consequences. Either the attributes of generable-ingenerable, alterable-inalterable, divisible-indivisible, etc., suit equally and commonly all world bodies – as much the celestial as the elemental – or Aristotle has wrongly and erroneously deduced, from circular motion, those attributes which he has assigned to celestial bodies."

Transactionalism abandons self-actional and inter-actional beliefs or suppositions that lead to incomplete problem-solving. In a world of subjective and objective information, co-operative exchange creates value in learning and becomes the foundation of a transactional competence based on recurrent inquiry into how objects (including people) behave as situations constantly evolve. Galileo deviated from the then-current Aristotelian thinking, which was defined by mere interactions rather than co-constitutive transacting among persons with different interests or among persons who may be solving competing intentions or conditions of life.

Modern antecedents
Trevor Phillips also outlined the philosophy's more recent developments found in the American philosophical works of Charles Sanders Peirce, sociologist George Herbert Mead (symbolic interactionism), pragmatist philosophers William James and John Dewey, and political scientist Arthur Bentley. Several sources credit anthropologist Fredrik Barth as the first scholar to apply the term "transactionalism", in 1959.
In a critique of structural functionalism, Barth offered a new interpretation of culture that avoided portraying an overly cohesive picture of society at the expense of the "roles, relationships, decisions, and innovations of the individual." Humans are transacting with one another at the multiple levels of individual, group, and environment. Barth's study appears not to fully articulate how this is happening all-at-once, as opposed to as if they were separate entities interacting independently ("interactional"):

[T]he "environment" of any ethnic group is not only defined by natural conditions, but also by the presence and activities of other ethnic groups on which it depends. Each group exploits only a section of the total environment, and leaves large parts of it open for other groups to exploit.

Using examples from the people of the Swat district of North Pakistan and, later, in 1966, organization among Norwegian fishermen, Barth set out to demonstrate that social forms like kinship groups, economic institutions, and political alliances are generated by the actions and strategies of the individuals who deploy organized acts against (or within) a context of social constraints. "By observing how people interact with each other [through experience], an insight could be gained into the nature of the competition, values[,] and principles that govern individuals' choices." Utilized as a "theoretical orientation" in Norwegian anthropology, transactionalism has been described as "process analysis" (prosessanalyse), categorized as a sociological theory or method. Though criticized for paying insufficient attention to cultural constraints on individualism, Barth's orientation influenced the qualitative method of symbolic interactionism applied throughout the social sciences. Process analysis considers the gradual unfolding of the course of interactions and events as key to understanding social situations. In other words, the transactional whole of a situation is not readily apparent at the level of individuals. At that level, an individual operates in a self-actional manner when much larger forces of sociality, history, biology, and culture are, all-at-once, at work on the individual as part of a global dynamic. Humans can never exist outside this dynamic current, as if they were operating the system in some self-actional or interactional way. Barth's approach reflects the co-constitutive nature of living in ever-evolving circumstances.

21st century applications

Transactional leadership (LAP)
In a new model of organizational management known as "leadership-as-practice" (LAP), Dewey and Bentley's Knowing and the Known categories of action—namely, self-action, inter-action, and trans-action—bring transactionalism into corporate culture. A transactional leadership practice is defined by its "trans-actors", who "enact new and unfolding meanings in on-going trans-actions." Actors operating "together-at-once" in a transaction are contrasted with the older model of leadership, defined by the practices of actors operating in a self-actional or inter-actional way. In the former models, the actors and situations often remain unchanged by leadership interventions over time. In leadership-as-practice, Joseph A. Raelin distinguishes between a "practice" that extends and amplifies the meaning of work and its value and "practices" that are habitual and sequential activities evoked to simplify everyday routines.
A transactional approach—leadership-as-practice—focuses attention on "existing entanglements, complexities, processes, [while also] distinguishing problems in order to coordinate roles, acts, and practices within a group or organization." Said another way, "trans-action attends to emergent becoming"—a kind of seeing together--"rather than substantive being" among the actors involved. Transactional competence Modern architects of the philosophy, John Patterson and Kirkland Tibbels, co-founders of Influence Ecology, acquired, edited, and published Phillips' dissertation (as is) in 2013. With a foreword written by Tibbels, a hardback and Kindle version was published under the title Transactionalism: An Historical and Interpretive Study (2013). The monograph is an account of how human phenomena came to be viewed less as the behavior of static and/or mutually isolated entities, and more as dynamic aspects of events in the process of problem-solving, and thereby becoming or satisfying, the unavoidable and inescapable conditions of human life. Philosophy Metaphysics: transactional (vs. self-actional or interactional) The transactional view of metaphysics—studying the nature of reality or what is real—deals with the inseparability of what is known and how humans inquire into what is known—both knowing and the known. Since the age of Aristotle, humans have shifted from one paradigm or system of "logic" to another before a transactional metaphysics evolved with a focus that examines and inquires into solving problems first and foremost based on the relationship of man as a biological organism (with a brain and a body) shaped by its environment. In the book Transactionalism (2015), the nature of reality is traced historically from self-action to interaction to transactional competence each as its own age of knowing or episteme. The pre-Galilean age of knowing is defined by self-action "where things [and thereby people] are viewed as acting on their own powers." In Knowing and the Known, Dewey and Bentley wrote, "The epistemologies, logics, psychologies and sociologies [of our day] are still largely [understood] on a self-actional basis." The result of Newtonian physics, interaction marks the second age of knowing; a system marked especially by the "third 'law of motion'—that action and reaction are equal and opposite". The third episteme is transactional competence. With origins in the contributions of Darwin, "man's understandings are finite as opposed to infinite. In the same way, his views, goals, commitments, and beliefs have relative status as opposed to absolute." John Dewey and Arthur Bentley asserted this competence as "the right to see together, extensionally and durationally, much that is talked about conventionally as if it were composed of irreconcilable separates." We tend to avoid considering our actions as part of a dynamic and transactional whole, whether in mundane or complex activities; whether in making an invitation, request, or offer or in the complex management of a program or company. We tend to avoid studying, thinking, and planning our moves and moods for a comprehensive, reciprocal, and co-constitutive—in other words, transactional—whole. A transactional whole includes the organized acts including ideas, narratives, people as resources implementing ideas, services, and products, the things involved, settings, and personalities, all considered in and over time. 
With this competence, that which acts and is acted upon become united for a moment in a mutual or ethical exchange, where both are reciprocally transformed contradicting "any absolute separation or isolation" often found in the dualistic thinking and categorization of Western thought. Dualistic thinking and categorization often lead to over-simplification of the transactional whole found in the convenient but ineffective resorting to "exclusive classifications." Such classifications tend to exclude and reify man as if he has dominion over his nature or the environment. In his seminal 20th century work Physics and Philosophy, Werner Heisenberg reflects this kind of transactionalist thinking: "What we observe is not nature itself, but nature exposed to our method of questioning." The together-at-once reality of man as organism-environment is often overlooked in the dualistic thinking of even major philosophers like Descartes who is often referenced for his "I think, therefore I am" philosophy. Of a transactionalist approach, Heisenberg writes, "This was a possibility of which Descartes could not have thought, but it makes the sharp separation of the world and I impossible." Dualistic thinking prevents man from thinking. "In the spirit of [Charles Sanders] Peirce, transactionalism substitutes continuity for discontinuity, change and interdependence for separateness." For example, in problem solving, whenever we "insert a name instead of a problem," when words like "soul," "mind," "need," "I.Q." or "trait" are expressed as if real, they have the power to block and distort free inquiry into what is known in fact or as fact in the transactional whole. In the nature of change and being, "that which acts and that which is acted upon" always undergo a reciprocal relationship that is affected by the presence and influence of the other. We as human beings, as part of nature as an organism "integral to (as opposed to separate from, above or outside of) any investigation and inquiry may use a transactionalist approach to expand our personal knowledge so as to solve life's complex problems. The purpose of transactionalism is not to discover what is already there, but for a person to seek and interpret senses, objects, places, positions, or any aspect of transactions between one's Self and one's environment (including objects, other people, and their symbolic interactions) in terms of the aims and desires each one needs and wants to satisfy and fulfill. It is essential that one simultaneously take into account the needs and desires of others in one's environment or ecology to avoid the self-actional or self-empowerment ideology of a rugged and competitive individualism. While other philosophies may discuss similar ethical concerns, this co-constitutive and reciprocal element of problem-solving is central to transactionalism. To put it simply, "to experience is to transact; in point of fact, experience is a transaction of organism-environment." In other words, what is "known" by the knower (or organism) is always filtered and shaped by both internal and external moods and narratives, mirrored in and through our relationships to the physical affordances and constraints in our environment or in specific ecologies. The metaphysics of transactional inquiry is characterized in the pragmatic writing of William James who insists that "single barreled terms," terms like "thought" and "thing," actually stop or block inquiries into what is known and how we know it. 
Instead, a transactional orientation of 'double-barreledness' or the "interdependence of aspects of experience" must always be considered. James offers his readers insight into the "double-barreledness" of experience with an apt proposition: Is the preciousness of a diamond a quality of the gem [the thing] or is it a feeling in our mind [the thought]? Practically we treat it as both or as either, according to the temporary direction of our thought. The 'experienced' and the 'experiencing,' the 'seen' and the 'seeing,' are, in actuality, only names for a single fact. What is real then, from a transactionalist perspective, must be constantly reevaluated relative to man as organism-environment in a co-constitutive and reciprocal dynamic with people, personalities, situations, and aims, and given the needs each party seeks to satisfy. Epistemology: truth from inquiry Transactionalists are firmly intolerant of "anything resembling an 'ultimate' truth – or 'absolute' knowledge." Humankind has the propensity to treat the mind and thought, or the mind and body, as abstractions, and this tendency to deny their interrelatedness or coordinated continuity results in misconceptions in learning and inaccurate thinking as humans move and thrive within an ecology. Accurate thinking and learning begin and are constantly developed through action resulting from thought as a repetitive circuit of experience, known in psychology as deliberate practice. Educational philosopher Trevor Phillips, now deceased, frames this tendency to falsely organize our perception: "[W]e fail to realize that we can know nothing about things [or ourselves] beyond their significance to us," otherwise we distort our "reality" and treat things we perceive within it, including our bodies or mind, as if concrete, thereby "denying the interconnectedness of realities" (plural). Transactionalists suggest that accurate (or inaccurate) thinking is rarely considered an unintended consequence of our propensity for abstractions. When an individual transacts through intelligent or consequential actions circumscribed within the constraints and conditions of her/his environment in a reflexive, repetitive arc of learned experience, there is a "transaction between means and ends" (see reference below). This transactional approach features twin aspects of a larger event rather than merely manipulating the means to an end in our circumstances and situations. For instance, a goal can never be produced by abstraction, by simply thinking about or declaring a promise to produce a result. Nor can it be anticipated or foreseen (an abstraction at best) without a significant "pattern of inquiry," as John Dewey later defined and articulated, into the constraints and conditions that happen and are happening given the interdependence of all the people and objects involved in a simple or complex transaction. The nature of our environment affects all these entities within a transaction. This reveals the limiting and reductive notion of manipulating a psychology around stimulus and response found in Aristotelian or Cartesian thought. A transaction is recognized here as one that occurs between the "means and ends;" in other words, transactional competence is derived from the "distinctions between the how, the what (or subject-matter), and the why (or what for)." This transactional whole constitutes a reciprocal connection and a reflexive arc of learned and lived experience. From a transactional approach one can derive a certain kind of value from one's social exchange.
Value lies in knowing how, what, and why the work done with one's mind and body fulfills the kinds of transactions needed to live a good and satisfying life that functions well with others. Truth from actual inquiry is foundational for the organism-environment to define and live by a set of workable ethical values that functions with others. Due to the evolution of psychology about the nature of man, transactionalists also reject the notion of a mind-body split or anything resembling the bifurcation of what they perceive as the circuitry in which our biological stimulus-response exists. Examples transactionalists reject include the self-acting notions of Aristotle, who posited that "the soul – the psyche – realized itself in and through the body, and that matter and form were two aspects involved in all existence." Later, the claims of French philosopher René Descartes, recognized as the father of modern Western philosophy, were examined and defined as "interactional". Descartes suggested stimulus-response as the realm where the mind controls the body and the body may influence the rational mind out of the passion of our emotions. Transactionalists recognize Cartesian dualism as a form of disintegrating the transactional whole of man "into two complete substances, joined to one another no one knows how." The body as a physical entity, on the one hand, and the soul or thought, on the other, was regarded in a Cartesian mindset as "an angel inhabiting a machine and directing it by means of the pineal gland." This transactionalists reject. Ethics: reciprocal and co-constitutive While self-interest governs the ethical principles of Objectivism, here the principle is that man as an organism is in a reciprocal, constitutive relationship with her/his environment. Disabusing the psychological supposition of our "skin-boundedness" (discussed further below), transactionalism rejects the notion that we are apart from our environment or that man has dominion over it. Man, woman, and child must view life and be viewed in the undifferentiated whole of organism-environment. This reciprocal and co-constitutive relationship is what sets Transactionalism apart from other philosophies. What John Dewey meant by "reciprocal" was that: ... consequences have to be determined on the grounds of what is selected and handled as means in exactly the same sense in which the converse holds and demands constant attention if activities are to be intelligently conducted. In order for a human being to know, in order for a human being to acquire intelligence, it must learn to relate to its Self as part of, not separate from, the internal and/or external environments in which it lives as an organism-environment. Whether the environment is natural or human-made, whether discussing biology, sociology, culture, linguistics, history and memory, or economics and physics, every organism-environment is reciprocal, constitutive, socially-conditioned, and constantly in flux, demanding our ethical attention to conditions and consequences as we live life. John Dewey and Arthur Bentley, like Charles Sanders Peirce before them, were out to distinguish an ethical "living" logic rather than a static one. Both rejected the supposition that man had dominion over or governed behavior in his/her environment, embracing a presupposition of transactionalism: we are reciprocal, co-constitutive, socially-conditioned, and motivated "together-at-once" as we seek solutions to living a good life.
Transactionalists reject the "localization" of our psychology as if "skin-bound." Bentley wrote, "No creature lives merely under its skin." In other words, we should not define and distinguish experience in and from the subjective mind and feelings. Conversely, we cannot rely solely on external circumstances or some static or inherited logic. Galileo said of followers of Aristotle in seeking ethical knowledge that one should "come with arguments and demonstrations of your own...but bring us no more texts and naked authorities, for our disputes are about the sensible world and not a paper one." Humans are always transacting, "together-at-once," part of, shaped by, and shaping the experience we call "knowledge" as an organism-environment. Dewey and Bentley were intrigued by, and ultimately questioned, "the significance of the concept 'skin' and its role in philosophical and psychological thought." They offered a biological or natural justification that came to define a transactionalist approach. The knower and what is known are both a function of man having "evolved among other organisms" within natural selection or evolution. Man's most intellectual and advanced "knowings" are not merely outgrowths of his own doing or being. The natural evolution of things outside our knowingness creates the very context in which our known and knowings arise. We are not inventing what is known outside of, or in a vacuum beyond, who we are; and who we are is an organism-environment, together-at-once. We are not creatures separated by skin with an internal world of the mind and body "in here" separate from an environment of objects and people "out there". Human beings intelligently live, adapt to, and organize life in a reciprocal, co-constitutive experience that is what Dewey and Bentley term "trans-dermal". A "trans-dermal" experience demands knowledgeable and accurate inquiry into the conditions and consequences of each transaction, where the organizing of ideas and acts (knowledge) is itself a transaction which grows out of the problem-solving and creative exploring within the universe of social situations in which we exist. Dewey and Bentley wrote, "truth, or for that matter falsity, is a function of the deliberately striven for consequences arising out of inquiry." Our behavior and acts in knowing, or transacting, must be considered "together" and "at-once" with their conditions and consequences for any ambitious movement or fulfillment to occur, alone and among other people, in any setting with objects and constructs inherited from others known and unknown over time. Transacting demands study, a slowing down of our movement, and the development of a transactional competence in order to fulfill certain needs or solve problems while functioning among others. In Dewey's final days, wrote Phillips, he emphasized the twin aspects of attending to both the means and the ends of any transaction: "It is…impossible to have an end-in-view or to anticipate the consequences of any proposed line of action." A "trans-dermal" consciousness is, therefore, key to moving ethically. To move, experience life, or transact in a principled manner, considering the reciprocal and co-constitutive nature of organism-environment becomes an object lesson governing social behavior as well as transacting from a trans-dermal view with objects or other bodies. Trans-dermal experience The work of Australian educational philosopher Vicki L.
Lee further elucidates and breaks down what "trans-dermal" experience is—how it works and why it matters—based on her work in the philosophy of cognitive science, educational philosophy, and radical behaviorism, about which she has published extensively. This complex paradigm is clearly evidenced by Lee in this thickly described example: Acts are more than movements. ...Our discriminations depend on movements and their contexts seen together-at-once or as an undifferentiated whole. In discriminating watering the garden from hosing the driveway, we see the bodily movements and their occasion and results. We see the garden, the watering implement, and so forth, as much as we see the body's activities. The notion of together-at-once emphasizes that we do not see movements and contexts separately and then infer the action. Rather the context is internal to the action, because without the context, the action would not be the action it is. A basic presupposition of the philosophy of transactionalism is to always consider that that which is known about the world (extra-dermal) is "directly concerned with the activity of the knower," rather than arising merely from some sense of "skin-boundedness" (intra-dermal). The known and the knower, as Dewey and Bentley examined in detail in their collaborative publication, must always be considered "twin aspects of common fact." Behavior, movement, and acts are not merely a function of the mind, of wishful or positive thinking or belief in external forces, nor can they be determined ethically from the philosophers of the past or knowledge written in a book. It is our ability to transact trans-dermally—to be and become ecologically-fit as an organism-environment—that begets truthful inquiry into living a good and satisfying life, functioning well among others. Philosophy and Women's Studies Professor Shannon Sullivan explores and applies "transactional knowing through embodied and relational lived experience" as a feminist epistemology developed out of the pragmatist tradition. Politics: cooperation and knowing-as-inquiry The branch of philosophy recognized as "politics" concerns the governance of community and group interaction, not merely the governing over a state or group as conventionally conceived in thoughts about local or national government. Transactionalists view politics as a cooperative, genuine interaction between all participating parties, whether buyer-seller, student-teacher, or worker-boss; we are biological as well as social subjects involved not merely in "transacting" for our own advantage or gain but connected to other entities. "[S]ocial phenomena cannot be understood except as there is prior understanding of physical conditions and the laws of their [socio-biological] interactions," wrote John Dewey in Logic: The Theory of Inquiry. Furthermore, he added, "inquiry into [social phenomena], with respect both to data that are significant and to their relations or proper ordering, is conditioned upon extensive prior knowledge of physical phenomena and their laws. This fact accounts in part for the retarded and immature state of social subjects." Thus, cooperation and knowing as inquiry is foundational to governing communal affairs of any kind, including economic trade and our educative process. In Laws of Motion (1920), physicist James Clerk Maxwell articulated the modern conception of "transaction" (or trans-action) used here.
His conception is not exclusive to an economic context or limited to the opposition of a buyer-seller in trade or some analogous situation. Unlike commercial affairs, there is a radical departure from any tendency to perceive buyer-seller (in an organism-environment paradigm) as if they are opposing or separate forces. Transactionalists like Maxwell view the buyer and seller as "two parts [or aspects] of the same phenomenon." Dewey and Bentley apply this 'transactional' view to the domain of learning more than any other context. Referred to as the educative process, acting without knowing (described below) often sets up the separation or fracturing of the enjoined phenomenon (e.g., knowing is doing, organizing the mental or physical acts in a pragmatic way). Without knowing-as-inquiry, blindly acting as an organism in an environment often does not work, with the exception of beginner's luck. Acting to understand knowing elicits pragmatic knowledge of functioning as an organism-environment; both knowing and acting must essentially involve inquiry into things that have happened and are happening in order to challenge assumptions and expectations which may be wrong in some context: Knowledge – if the term is to be employed at all – is a name for the product of competent inquiries, and is constituted only as the outcome of a particular inquiry. From the constitutive process of knowing and doing, knowledge is more than "a process taking place" or some "status" located in an organism's [or person's] mind. Knowledge arises from inquiry. It arises out of a kind of testing, an iterative process of inquiry into what we know and expect, that ensures a suitable fitness not only in solving problems (finding a solution). It ensures the fitness of the organism-environment, which may vary depending on the situation, the time and place, or the culture. While a person is central (or "nuclear" as in a nucleus) to a conception of organism-environment, human beings as organisms must abdicate any sense of dominion over their social-biological cosmos. Being human is but a part of, and never outside, that cosmos or environment, which they need in order to survive and must adapt to in order to thrive. Each situation and the assumptions about it—and this, transactionalists assert, is a radical way of thinking—must be tested, examined, and determined by a series of iterative moves and activity based on the capacity of that organism's ability to fulfill its desired intentions to eventually thrive (or not). Dewey and Bentley later insisted that knowing "as inquiry, [is therefore] a way, or distinct form, of behavior," out of which a transactional competence is achieved. In our existing models of formal education, we bifurcate what Dewey viewed as indispensable. We, as a rule, segregate "utility and culture, absorption and expression, theory and practice... in any educational scheme." In 1952, progressive educator Elsie Ripley Clapp distinguished a similar commitment to a "cooperative transaction of inquiry" in a vision of education that enjoined those in a community and those inside a school. Intelligence—that which is acquired through knowledgeable inquiry and mental testing—allows man to analyze and foresee consequences derived from the past experiences shaping our biases and expectations. Without intelligence of this kind, one is unlikely to control his/her actions free of preconceived dogma, rites, or beliefs that might be wrong absent a proper inquiry.
If the philosophical study of politics were actually considered a "study of force," transactionalists would assert that knowing "what actions are permissible" (or not) follows from the condition of being an organism-environment; cooperation and knowing-as-inquiry into one's bodily condition and conditioning, and into the situation in which one transacts and which conditions one's body, are all vital to functioning successfully among others in any social situation or environment. In the Stanford Encyclopedia of Philosophy, it is noted that John Dewey was critical of the classical liberal stance that abstracts the individual from environment as if the individual precedes, or lords from outside of, a conception of society or social institutions. Dewey maintained that social institutions were not a "means for obtaining something for individuals. They are means for 'creating' individuals in a co-operative inquiry into knowing how to live a satisfying life (Reconstruction in Philosophy, MW12, 190–192)." [C]lassical liberalism treats the individual as 'something given.' Instead, Dewey argues, 'liberalism knows that an individual is nothing fixed, given ready-made. It is something achieved, and achieved not in isolation but with the aid and support of conditions, cultural and physical: — including in "cultural", economic, legal and political institutions as well as science and art' ('The Future of Liberalism', LW11: 291). For Dewey, such treatment is 'the most pervasive fallacy of philosophical thinking' ('Context and Thought', LW5, 5). Transactionalism is a radical form of governing one's self in one's environment(s). Transactionalism resists a political tendency to "divide up experienced phenomena, and to take the distinct analysed elements to be separate existences, independent both of the analysis and of each other." Intelligent thinking is anti-dualistic, accurate forethought. It takes into account other people, communities, and cultures. It stems from a "deliberate control of what is done with reference to making what happens to us and what we do to things as fertile as possible of suggestions (of suggested meanings)." [emphasis added] Furthermore, intelligent thinking is a means for trying out the validity of those suggestions and other assumptions. The political governing of thinking towards dualisms and bifurcation, as well as the "false conception of the individual" (apart from their environment), is what Dewey argued actually limits man's free (meaning "liberal") thought and action. All of this served as the core reasoning behind Dewey's development of an experimental philosophy that offset elite distortions of public education and learning. Individual as co-constitutive, organism-environment Transactionalist psychologists and educational philosophers reject ideologies precipitated from Western notions of do-it-yourself or the phrase If it is to be, it's up to me! Such mentalities tend to lead to entitlement. The naiveté of slogans like "follow your passion" often denies any consideration of our trans-dermal condition—our internal fitness and the external fitness of who we are as organism-environment. Transactionalists assert that the "advancing conformity and coercive competition so characteristic of our times" demands reassessment. A new "philosophical-psychological complex" is offered that confronts the "ever increasing growth of bureaucratic rule and the attendant rise of a complacent citizenry."
Given the intensification of globalization and migration, a trans-dermal consciousness allows for a transactional emphasis on "human dignity and uniqueness" despite "a matrix of anxiety and despair [and] feelings of alienation." Transactionalist psychologists and philosophers replace a once sought-after existentialism as a remedy to feelings of alienation with a trans-dermal, organism-environment orientation to living. Rather than applying a theory or approach that emphasizes the individual as a "free and responsible agent determining their own development through acts of the will," subjects are invited to co-create functioning among all other organism-environments, including the specific conditions and consequences of any objects and personalities involved, in order to intelligently structure existence in and among it all. The very act of participating in co-creation, according to transactionalists, gives and allows each person her/his unique status and dignity in their environment. Aesthetics: value-satisfaction from an assumptive world Distinct from an aesthetic theory of taste or a rationale for the beauty in an object of art, a transactionalist theory of aesthetics concerns the perceptual judgments we use to define value, purposeful activity or satisfaction in any experience. Based on studies by transactionalist psychologists Adelbert Ames, Jr. (known for The Ames Demonstrations), William Howard Ittelson, Hadley Cantril, along with John Dewey, the biological role of perception is key to understanding transactionalism. Perceiving is viewed as "part of the process of living by which each one of us, from his own particular point of view, creates for himself the world within which he has his life's experiences and through which he strives to gain his satisfactions." The sum total of these assumptions was recognized as the "assumptive world." The assumptive world stems from all that we experience, all the things and events we assess and assign meaning to, which function as a contextual whole also known as a transactional whole. Dewey also referred to the assumptive world as a "situation" (where organism and environment are inseparable) or as a "field" in which behavior, stimulus, and response are framed as if a reflexive circuit. Trevor Phillips noted, "To the modern transactionalist, experiences alter perceptual processes, and in the act of altering them, the purposing aspect of perception is either furthered or its fulfillment interfered with." It is through action, through movement, that man is capable of bringing forth a value-satisfaction—the perception of satisfying an aim or outcome—to her or his experience. Man's capacity to "sense value in the quality of his experience" was registered through his serial expectations and standards stemming from previous transactions throughout life. A theory of value is therefore derived from one's behavioral inquiry within an assumptive world. "Knowledge is a transaction that develops out of man's explorations within [that] cosmos." Transactionalists reject the notion that any truth is inherently settled or beyond question. The consequences of any inquiry will be dependent on the situation or transactional whole in which man as an organism-environment finds him- or her-self. Since our body and the physical environments and social ecologies in which it trans-acts are continually in flux across time and space, a singular or repetitive assumption carried over in an unthinking manner may not be valuable or satisfactory. 
To clarify the theory of valuation, John Dewey wrote: "To declare something satisfactory [vs. satisfying] is to assert that it meets specifiable conditions. It is, in effect, a judgment that the thing 'will do'. It involves a prediction; it contemplates a future in which the thing will continue to serve; it will do. It asserts a consequence the thing will actively institute, it will do." Ultimately, transactionalism is a move away from the conclusion that knowledge depends on an independent knower and something to be known. The reality of a particular situation depends, transactionally speaking, on the interpretation place[d] upon the situation by a particular person. Interpretation is possible only through the accumulation of experience which, in effect, is what is meant by "assumptive world". Without the hitches and mistakes one encounters in the welter of daily living, the nature of the assumptive world would never arise into consciousness. The assumptive world, initially highlighted in the 25 experiments in perception known as "The Ames demonstrations," becomes the seeming reality of our world. Man's transactions of living involve, in sum, capacities and aspects of his nature operating together. To transact is to participate in the process of translating the ongoing energies of the environment into one's own perceptual awareness, and to transform the environment through the perceptual act. Value-satisfaction arises when the inadequacies of man's assumptive world are revealed or invalidated. Thereby, the consequences of any transactional experience determine what is valuable or what will do vs. that which is satisfying but will not do. The good life, for the transactionalist, consists of a unity of values, achieved by means of reflective thought, and accepted in the full light of their conditions and consequences. To transact is to act intelligently with an aim in mind while avoiding the tendency to surrender one's awareness to complacency or indifference that stems from mere information or untested knowledge. Without action, a person can fool herself, distorting her sense of satisfaction or value on behalf of consequences she or others prefer. Through action, the individual perceptions as well as the shared perceptual common sense of an assumptive world are validated and modified. We predict and refine our conditions of life, yet "any standard set for these value qualities is influenced by the individual's personal biological and life history." Transactionalism is a creative process that takes into account the unique biology and biography of persons involved. Generational significance The importance of the study of transactionalism arose in the late 1960s in response to an "alienation syndrome" among youth of that generation. As the counter-culture challenged and reassessed society's "philosophical-psychological complex, its Weltanschauung," their political and social alienation sparked protests against the war and the draft as well as historic racial rebellions in various U.S. cities. The long, hot summer of 1967 and the counterculture movement named the Summer of Love, also in 1967, reflected the antipathy of young people who questioned everything. American society's norms and values were perceived as denying dignity to all. Riots of the period were studied in a report by the U.S. Kerner Commission, and scholars began to study the patterns of alienation expressed by youth in the sixties. Youth sought a kind of existentialism expressed by a need to be "true to oneself."
This current of alienation unfortunately veered away from a relevant understanding of the transactional whole, which takes into account the reciprocal and co-constitutive nature of man as an organism-environment fulfilling important conditions of life with others all the time. That understanding resembles the famous line from Devotions upon Emergent Occasions, written by English poet John Donne – "No man is an island". Transactionalism presented an alternative to the limitation and unintended outcomes of the alienation syndrome. Benefits and applications Designed to account for all aspects of experience—subjective and objective—transactionalism requires a slowing down in assessing all the facts involved with the how, what, when, where, and why as we move to transact with others. It requires always considering how a transaction with another and one's self (e.g., a parent or spouse spending additional hours socializing at the gym) is or is not beneficial to all involved in a transaction (e.g., other members of the family). The costs may be in time, attention, or money or in a condition of life (e.g., family, career, sleep). Transactionalism requires an interdependence of thought, study, and action. A transactionalist must account for one's biology and cognition (metaphysics); the ways of knowing reality (epistemology); the reciprocal, co-constitutive relationship (or ethics) between our social self and the interactions constrained by both our natural and human-made environment. We as human beings live in distinct sociological patterns with people and with material and immaterial culture, shaped by specific and ever-changing times and places, further articulated by increasing migration and globalization. Transactionalism insists that one attend to the political distribution of goods and services along with the ways their value has been and is being exchanged among people and groups (politics), as well as how persons are socialized to understand what it means to live a good life and to fulfill those conditions over time (aesthetics). Transactionalism offers more than existentialism did with its aim of being "true to oneself." The alienation that results from existentialism's orientation to the self at the expense of societal norms and values, even in small groups, often leads to naiveté, despair, frustration, agitation, and even indifference, at the expense of consciously organizing one's acts, while functioning among others, to fulfill one's unique and necessary interests in living a good and satisfying life. Transactionalism counters the naive "do as I see fit" mentality of authenticity regardless of others' needs and concerns, which inevitably leads to negative consequences and outcomes over time. Transactionalism depends upon the "integration of man and his surroundings." Phillips' dissertation documented the evolution of a "transactional approach": one that rests on the fact that we are biological and linguistic, and that we must transact considering a trans-dermal experience of our thoughts, behavior, and exchange on every level imagined, while functioning ethically and well with others. A series of podcasts exemplifies the application of a transactional approach to a diverse array of professionals from various countries. See also Hilary Putnam References Philosophical theories Education theory
Community mobilization
Community mobilization is an attempt to bring both human and non-human resources together to undertake developmental activities in order to achieve sustainable development. Process Community mobilization is a process through which action is stimulated by a community itself, or by others, that is planned, carried out, and evaluated by a community's individuals, groups, and organizations on a participatory and sustained basis to improve the health, hygiene and education levels so as to enhance the overall standard of living in the community. A group of people have transcended their differences to meet on equal terms in order to facilitate a participatory decision-making process. In other words, it can be viewed as a process which begins a dialogue among members of the community to determine who, what, and how issues are decided, and also to provide an avenue for everyone to participate in decisions that affect their lives. Requirements Community mobilization needs many analytical and supportive resources, both internal (inside the community) and external (outside the community). Resources include: Leadership Organizational capacity Communications channels Assessments Problem solving Resource mobilization Administrative and operational management Strategies The Centre for Disease Control envisions that strong healthcare initiatives will be readily owned by a community if the leaders ("grass tops"), the citizens ("grass roots"), and youth are fully engaged in mobilizing the community, educating stakeholders, and implementing evidence-based interventions. In this respect, 14 strategies guided by best practice have been reported (Huberman 2014): 1. Secure strong leadership 2. Establish a formal structure 3. Engage diverse organizations, community leaders, and residents 4. Ensure authentic participation and shared decision making 5. Ensure authentic and productive roles for young people 6. Develop a shared vision 7. Conduct a needs assessment 8. Create a strategic plan 9. Implement mutually reinforcing strategies 10. Create a fundraising strategy 11. Establish effective channels for internal communication 12. Educate the community 13. Conduct process and outcome evaluations 14. Evaluate the community mobilization effort separately Implications "Community mobilization" is a frequently used term in the development sector. Recently, community mobilization has proven to be a valuable and effective concept with various implications for dealing with basic problems such as health and hygiene, population, pollution, and gender bias. References Community organizing Community
Integral humanism (India)
Integral humanism was a set of concepts drafted by Deendayal Upadhyaya as a political program and adopted in 1965 as the official doctrine of the Jan Sangh and later the BJP. The doctrine is also interpreted as 'Universal Brotherhood', an earlier theosophist and, in turn, Freemason-inspired phenomenon. Upadhyaya borrowed Gandhian principles such as sarvodaya (progress of all), swadeshi (domestic), and Gram Swaraj (village self rule), and these principles were appropriated selectively to give more importance to cultural-national values. These values were based on an individual's undisputed subservience to the nation as a corporate entity. The creation and adoption of these concepts helped to suit the major discourses in the Indian political arena of the 1960s and 1970s. This highlighted efforts to portray the Jan Sangh and the Hindu nationalist movement as a high-profile right fringe of the Indian political mainstream. A major change here, compared to Golwalkar's works, was the use of the word "Bhartiya", which Richard Fox had translated as "Hindian", a combination of Hindu and Indian. Due to the official secularism in politics, it had become impossible to invoke explicit reference to "Hindu", and the usage of the word Bhartiya allowed it to circumvent this political reality. Upadhyaya considered that it was of utmost importance for India to develop an indigenous economic model with the human being at center stage. This approach made this concept different from Socialism and Capitalism. Integral Humanism was adopted as the Jan Sangh's political doctrine, and its new openness to other opposition forces made it possible for the Hindu nationalist movement to have an alliance in the early 1970s with the prominent Gandhian Sarvodaya movement going on under the leadership of J. P. Narayan. This was considered the first major public breakthrough for the Hindu nationalist movement. Philosophy According to Pandit Deendayal Upadhyaya, the primary concern in India should be to develop an indigenous development model that has human beings as its core focus. It is opposed to both western capitalist individualism and Marxist socialism, though welcoming to western science. It seeks a middle ground between capitalism and socialism, evaluating both systems on their respective merits, while being critical of their excesses and alienness. Four objectives of humankind Humankind, according to Upadhyaya, had four hierarchically organized attributes of body, mind, intellect and soul which corresponded to the four universal objectives of dharma (moral duties), artha (wealth), kama (desire or satisfaction), and moksha (total liberation or 'salvation'). While none could be ignored, dharma is the 'basic', and moksha the 'ultimate' objective of humankind and society. He claimed that the problem with both capitalist and socialist ideologies is that they only consider the needs of body and mind, and were hence based on the materialist objectives of desire and wealth. Rejection of individualism Upadhyaya rejected social systems in which individualism 'reigned supreme'. He also rejected communism in which individualism was 'crushed' as part of a 'large heartless machine'. Society, according to Upadhyaya, rather than arising from a social contract between individuals, was fully born at its inception itself as a natural living organism with a definitive 'national soul' or 'ethos', and the needs of the social organism paralleled those of the individual.
Origins Advaita Vedanta Upadhyaya was of the opinion that Integral Humanism followed the tradition of advaita developed by Adi Sankara. Nondualism represented the unifying principle of every object in the universe, and of which humankind was a part. This, claimed Upadhyaya, was the essence and contribution of Indian culture. Mahatma Gandhi Integral humanism is almost an exact paraphrase of Mahatma Gandhi's vision of a future India. Both seek a distinctive path for India, both reject the materialism of socialism and capitalism alike, both reject the individualism of modern society in favor of a holistic, varna-dharma based community, both insist upon an infusion of religious and moral values in politics, and both seek a culturally authentic mode of modernization that preserves Hindu values. Integral humanism contains visions organized around two themes: morality in politics and swadeshi, and small-scale industrialization in economies, all Gandhian in their general thematic but distinctly Hindu nationalist. These notions revolve around the basic themes of harmony, primacy of cultural-national values, and discipline. Contrast with Nehruvian economic policies Upadhyaya rejects Nehruvian economic policies and industrialization on the grounds that they were borrowed uncritically from the West, in disregard of the cultural and spiritual heritage of the country. There is a need, according to Upadhyaya, to strike a balance between the Indian and Western thinking in view of the dynamic nature of the society and the cultural heritage of the country. The Nehruvian model of economic development, emphasizing the increase of material wealth through rapid industrialization, promoted consumerism in Indian society. Not only has this ideology of development created social disparities and regional imbalances in economic growth, but it has failed to alleviate poverty in the country. The philosophy of integral humanism, like Gandhism, opposes unbridled consumerism, since such an ideology is alien to Indian culture. This traditional culture stresses putting restraints on one's desires and advocates spiritual contentment rather than ruthless pursuit of material wealth. See also Integral humanism (Maritain) Integralism Traditionalist conservatism Hindu nationalism Hindutva References Sources Alt URL Further reading Two Extracts from Integral Humanism from External links Philosophy of Integral Humanism at the website of BJP (English) Hindu philosophical concepts Indian political philosophy Political ideologies Hindu nationalism Bharatiya Janata Party Religious humanism Political science terminology Bharatiya Jana Sangh Philosophy of culture
Anglocentrism
Anglocentrism refers to the practice of viewing the world primarily through the lens of English or Anglo-American culture, language, and values, often marginalizing or disregarding non-English-speaking or non-Anglo perspectives. This term is used to describe a bias that elevates English-speaking countries and their viewpoints over others, particularly in global discourse, education, media, and politics. Historically, Anglocentrism emerged alongside British imperialism, where British norms and values were exported globally through colonization. In modern contexts, it often manifests in the dominance of the English language in international communication, academia, and business, with English-speaking countries (especially the United States and the United Kingdom) setting standards in many fields. Critics of Anglocentrism argue that it fosters cultural homogenization and erases the diversity of global voices. In educational settings, for example, Anglocentric curriculums may overlook non-Western knowledge systems or cultural contributions. Moreover, in media and politics, the prominence of English-speaking narratives may limit the representation of non-Anglo cultures and experiences. As global interconnectedness grows, awareness of Anglocentrism and its effects has led to efforts to promote linguistic and cultural pluralism in international institutions and discourse. References Eurocentrism British Empire Commonwealth of Nations Historical regions
Domain knowledge
Domain knowledge is knowledge of a specific discipline or field in contrast to general (or domain-independent) knowledge. The term is often used in reference to a more general discipline—for example, in describing a software engineer who has general knowledge of computer programming as well as domain knowledge about developing programs for a particular industry. People with domain knowledge are often regarded as specialists or experts in their field. Knowledge capture In software engineering, domain knowledge is knowledge about the environment in which the target system operates, for example, software agents. Domain knowledge usually must be learned from software users in the domain (as domain specialists/experts), rather than from software developers. It may include user workflows, data pipelines, business policies, configurations and constraints, and is crucial in the development of a software application. Expert domain knowledge (frequently informal and ill-structured) is transformed into computer programs and active data, for example into a set of rules in knowledge bases, by knowledge engineers. Communication between end-users and software developers is often difficult. They must find a common language to communicate in. Developing enough shared vocabulary to communicate can often take a while. The same knowledge can be included in the domain knowledge of different domains. Knowledge which may be applicable across a number of domains is called domain-independent knowledge, for example logic and mathematics. Operations on domain knowledge are performed by metaknowledge. See also Artificial intelligence Domain (software engineering) Domain engineering Domain of discourse Knowledge engineering Subject-matter expert Literature Hjørland, B. & Albrechtsen, H. (1995). Toward A New Horizon in Information Science: Domain Analysis. Journal of the American Society for Information Science, 1995, 46(6), p. 400–425. Knowledge engineering
Peripeteia
Peripeteia (alternative Latin form: Peripetīa, ultimately from Ancient Greek περιπέτεια) is a reversal of circumstances, or turning point. The term is primarily used with reference to works of literature; its anglicized form is peripety. Aristotle's view Aristotle, in his Poetics, defines peripeteia as "a change by which the action veers round to its opposite, subject always to our rule of probability or necessity." According to Aristotle, peripeteia, along with discovery, is the most effective when it comes to drama, particularly in a tragedy. He wrote that "The finest form of Discovery is one attended by Peripeteia, like that which goes with the Discovery in Oedipus...". Aristotle says that peripeteia is the most powerful part of a plot in a tragedy along with discovery. A peripety is the change of the kind described from one state of things within the play to its opposite, and that too in the way we are saying, in the probable or necessary sequence of events. There is no element quite like peripeteia; it can bring forth or result in terror or mercy, or, in comedies, it can bring a smile or tears (Rizo). This is the best way to spark and maintain attention throughout the various forms and genres of drama. "Tragedy imitates good actions and, thereby, measures and depicts the well-being of its protagonist. But in his formal definition, as well as throughout the Poetics, Aristotle emphasizes that '... Tragedy is an imitation not only of a complete action, but also of events inspiring fear or pity' (1452a 1); in fact, at one point Aristotle isolates the imitation of 'actions that excite pity and fear' as 'the distinctive mark of tragic imitation' (1452b 30). Pity and fear are effected through reversal and recognition; and these 'most powerful elements of emotional interest in Tragedy-Peripeteia or Reversal of the Situation, and recognition scenes-are parts of the plot' (1450a 32). [Peripeteia] has the shift of the tragic protagonist's fortune from good to bad, which is essential to the plot of a tragedy. It is often an ironic twist. Good uses of Peripeteia are those that especially are parts of a complex plot, so that they are defined by their changes of fortune being accompanied by reversal, recognition, or both" (Smithson). Peripeteia includes changes of character, but also more external changes. A character who becomes rich and famous from poverty and obscurity has undergone peripeteia, even if his character remains the same. When a character learns something he had been previously ignorant of, this is normally distinguished from peripeteia as anagnorisis or discovery, a distinction derived from Aristotle's work. Aristotle considers anagnorisis, leading to peripeteia, the mark of a superior tragedy. Two such plays are Oedipus Rex, where the oracle's information that Oedipus has killed his father and married his mother brings about his mother's death and his own blindness and exile, and Iphigenia in Tauris, where Iphigenia realizes that the strangers she is to sacrifice are her brother and his friend, resulting in all three of them escaping Tauris. These plots he considered complex and superior to simple plots without anagnorisis or peripeteia, such as when Medea resolves to kill her children, knows they are her children, and does so. Aristotle identified Oedipus Rex as the principal work demonstrating peripety. (See Aristotle's Poetics.) Examples Oedipus Rex In Sophocles' Oedipus Rex, the peripeteia occurs towards the end of the play when the Messenger brings Oedipus news of his parentage.
In the play, Oedipus is fated to murder his father and marry his mother. His parents, Laius and Jocasta, try to forestall the oracle by sending their son away to be killed, but he is actually raised by Polybus and his wife, Merope, the rulers of another kingdom. The irony of the Messenger's information is that it was supposed to comfort Oedipus and assure him that he was the son of Polybus. Unfortunately for Oedipus, the Messenger says, "Polybus was nothing to you, [Oedipus] that's why, not in blood" (Sophocles 1113). The Messenger received Oedipus from one of Laius' servants and then gave him to Polybus. The plot comes together when Oedipus realizes that he is the son and murderer of Laius as well as the son and husband of Jocasta. Martin M. Winkler says that here, peripeteia and anagnorisis occur at the same time "for the greatest possible impact" because Oedipus has been "struck a blow from above, as if by fate or the gods. He is changing from the mighty and somewhat arrogant king of Thebes to a figure of woe" (Winkler 57). Conversion of Paul on the road to Damascus The instantaneous conversion of Paul on the road to Damascus is a classic example of peripeteia, which Eusebius presented in his Life of Constantine as a pattern for the equally revelatory conversion of Constantine. Modern biographers of Constantine see his conversion less as a momentary phenomenon than as a step in a lifelong process. The Three Apples In "The Three Apples", a medieval Arabian Nights tale, the murderer reveals himself near the middle of the story and explains his reasons behind the murder in a flashback, which begins with him going on a journey to find three rare apples for his wife; after returning, he finds out that she cannot eat them due to her lingering illness. Later at work, he sees a slave passing by with one of those apples, claiming that he received it from his girlfriend, a married woman with three such apples her husband had given her. He returns home and demands that his wife show him all three apples, but she only shows him two. This convinces him of her infidelity, and he murders her as a result. After he disposes of her body, he returns home, where his son confesses that he had stolen one of the apples and that a slave, whom he had told about his father's journey, had fled with it. The murderer thus realizes his guilt and regrets what he has just done. The second use of peripety occurs near the end. After finding out about the culprit behind the murder, the protagonist Ja'far ibn Yahya is ordered by Harun al-Rashid to find the tricky slave within three days, or else he will have Ja'far executed instead. After the deadline has passed, Ja'far prepares to be executed for his failure and bids his family farewell. As he hugs his youngest daughter, he feels a round object in her pocket, which is revealed to be the same apple that the culprit was holding. In the story's twist ending, the daughter reveals that she obtained it from their slave, Rayhan. Ja'far thus realizes that his own slave was the culprit all along. He then finds Rayhan and solves the case, preventing his own execution; this revelation is the tale's final plot twist. See also Deus ex machina Notes Further reading Aristotle, Poetics, trans. Ingram Bywater; Modern Library College Editions, New York, 1984. Finlayson, James G., "Conflict and Reconciliation in Hegel's Theory of the Tragic", Journal of the History of Philosophy 37 (1999); pp. 493–520. Lucas, F. L., "The Reverse of Aristotle" (an essay on peripeteia), Classical Review, Vol. XXXVII Nos 5,6; Aug.–Sept.
1923; pp. 98–104. Rizo, Juan Pablo Mártir, Poetica de Aristoteles traducida de Latin; M. Newels Elias L. Rivers MLN, Vol. 82, No. 5, General Issue. (Dec., 1967), pp. 642–643 Silk, M. S., Tragedy and the Tragic: Greek Theatre and Beyond; Oxford, 1998; pp. 377–380. Smithson, Isaiah, Journal of the History of Ideas, Vol. 44, No. 1. (Jan. - Mar., 1983), pp. 3–17. Sophocles, Oedipus the King, in The Three Theban Plays, trans. Robert Fagles; Comp. Bernard Knox; New York: Penguin, 1982. Winkler, Martin M., Oedipus in the Cinema, Arethusa, 2008; pp. 67–94. External links Britannica Online Encyclopedia F. L. Lucas, "The Reverse of Aristotle": a discussion of Peripeteia (Classical Review, August–September 1922) Clifford Leech, Tragedy Ancient Greek theatre Narratology Plot (narrative) Poetics Greek words and phrases Concepts in ancient Greek aesthetics
Toyota Kata
Toyota Kata is a management book by Mike Rother. The book explains the Improvement Kata and Coaching Kata, which are a means for making the continual improvement process observed in the Toyota Production System teachable. Overview Toyota Kata defines management as "the systematic pursuit of desired conditions by utilizing human capabilities in a concerted way." Rother proposes that it is not solutions themselves that provide sustained competitive advantage and long-term survival, but the degree to which an organization has mastered an effective routine for developing fitting solutions again and again, along unpredictable paths. This requires teaching the skills behind the solution. In this management approach, a primary job of leaders and managers is to develop people so that desired results can be achieved. They do this by having the organization members (leaders and managers included) deliberately practice a routine, or kata, that develops and channels their creative abilities. Kata are patterns that are practiced so they become second nature, and were originally movement sequences in martial arts. Two major components of a kata are a Coaching Kata and an Improvement Kata. The Coaching Kata helps to develop skill in supporting learners as they practice an Improvement Kata. Without coaching, a learner may practice incorrectly or ineffectively, and a change is less likely. But because coaching itself is a skill that requires practice, the Coaching Kata helps to develop the skills to support a learner practicing an Improvement Kata. In other words, the Coaching Kata is practiced by managers, supervisors and team leaders who want to coach their learners in a scientific way of thinking and acting. The Improvement Kata The Improvement Kata is a routine for moving from the current situation to a new situation in a creative, directed, meaningful way. It is based on a four-part model: In consideration of a vision or direction... Grasp the current condition. Define the next target condition. Move toward that target condition iteratively, which uncovers obstacles that need to be worked on. In contrast to approaches that attempt to predict the path and focus on implementation, the Improvement Kata builds on discovery that occurs along the way. Teams using the Improvement Kata learn as they strive to reach a target condition, and adapt based on what they are learning. Toyota Kata submits that the Improvement Kata pattern of thinking and behavior is universal: applicable not only in business but also in education, politics, daily living, etc. The book's underlying message is that when people practice and learn a kata for how to proceed through unclear territory, they don't need to fear the obstacles, changes and unknowns they encounter. Rather than trying to hold on to a sense of certainty based on one's perspective, people can derive confidence from a kata for working through uncertainty. The notion of the Improvement Kata as a process of continuous improvement also contributed to the formation of the DevOps movement. The Coaching Kata The Coaching Kata is a routine used to teach the Improvement Kata and requires prior experience with practicing the Improvement Kata. The teaching approach embodied in the Improvement Kata within the Toyota model emphasizes a continuous, respectful mentor-mentee relationship.
This method is centered around several key principles: Ongoing Coaching Necessity; Foundation of Respect and Trust; Universal Coaching System; High Skill Standards for Coaches; Importance of A3 Thinking; Role of the Coach; Self-Directed Solution Finding; and Accountability of Results. This method showcases a patient, respectful, and immersive learning culture, where continuous improvement is not just a goal but a journey of collaborative growth and shared responsibility. References External links Dave Moran, Book review, Software Results, 23 September 2011 Dirk Dusharme, A Kata for Developing Solutions – interview with Mike Rother, QualityDigest.com, 10 June 2010 Kata Practitioner Days KPD, Open source recommendations for running a KPD – sample agendas and promotion ideas KataCon, Annual Gathering of Kata Proponents Business books 2009 non-fiction books Toyota Production System
Democracy and Education
Democracy and Education: An Introduction to the Philosophy of Education is a 1916 book by John Dewey. Synopsis In Democracy and Education, Dewey argues that the primary ineluctable facts of the birth and death of each one of the constituent members in a social group determine the necessity of education. On one hand, there is the contrast between the immaturity of the new-born members of the group (its future sole representatives) and the maturity of the adult members who possess the knowledge and customs of the group. On the other hand, there is the necessity that these immature members be not merely physically preserved in adequate numbers, but that they be initiated into the interests, purposes, information, skill, and practices of the mature members: otherwise the group will cease its characteristic life. Dewey observes that even in a "savage" tribe, the achievements of adults are far beyond what the immature members would be capable of if left to themselves. With the growth of civilization, the gap between the original capacities of the immature and the standards and customs of the elders increases. Mere physical growing up and mastery of the bare necessities of subsistence will not suffice to reproduce the life of the group. Deliberate effort and the taking of thoughtful pains are required. Beings who are born not only unaware of, but quite indifferent to, the aims and habits of the social group have to be rendered cognizant of them and actively interested. According to Dewey, education, and education alone, spans the gap. Reception Dewey's ideas were never broadly and deeply integrated into the practices of American public schools, though some of his values and terms were widespread. In the post-Cold War period, however, progressive education had reemerged in many school reform and education theory circles as a thriving field of inquiry learning and inquiry-based science. Some find it cumbersome that Dewey's philosophical anthropology, unlike Egan, Vico, Ernst Cassirer, Claude Lévi-Strauss, and Nietzsche, does not account for the origin of thought of the modern mind in the aesthetic, more precisely the myth, but instead in the original occupations and industries of ancient people, and eventually in the history of science. A criticism of this approach is that it does not account for the origin of cultural institutions, which can be accounted for by the aesthetic. Language and its development, in Dewey's philosophical anthropology, have not a central role but are instead a consequence of the cognitive capacity. Legacy While Dewey's educational theories have enjoyed a broad popularity during his lifetime and after, they have a troubled history of implementation. Dewey's writings can also be difficult to read, and his tendency to reuse commonplace words and phrases to express extremely complex reinterpretations of them makes him susceptible to misunderstanding. So while he held the role of a leading public intellectual, he was often misinterpreted, even by fellow academics. Many enthusiastically embraced what they mistook for Dewey's philosophy, but which in fact bore little or a distorted resemblance to it. Simultaneously, other progressive educational theories, often influenced by Dewey but not directly derived from him, were also becoming popular, such as Educational perennialism which is teacher-centered as opposed to student-centered. The term 'progressive education' grew to encompass numerous contradictory theories and practices, as documented by historians like Herbert Kliebard. 
Several versions of progressive education succeeded in transforming the educational landscape: the utter ubiquity of guidance counseling, to name but one example, springs from the progressive period. Radical variations of educational progressivism were troubled and short-lived, a fact sometimes cited as evidence of their failure. But they were perhaps too rare and ill-funded to constitute a thorough test. See also List of publications by John Dewey References External links Democracy and Education: An Introduction to the Philosophy of Education public domain book at Wikisource
News values
News values are "criteria that influence the selection and presentation of events as published news." These values help explain what makes something "newsworthy." News values are not universal and can vary between different cultures. Among the many lists of news values that have been drawn up by scholars and journalists, some attempt to describe news practices across cultures, while others have become remarkably specific to the press of particular (often Western) nations. In the Western tradition, decisions on the selection and prioritization of news are made by editors on the basis of their experience and intuition, although analysis by Galtung and Ruge showed that several factors are consistently applied across a range of news organizations. Their theory was tested on the news presented in four different Norwegian newspapers from the Congo and Cuban crisis of July 1960 and the Cyprus crisis of March–April 1964. Results were mainly consistent with their theory and hypotheses. Johan Galtung later said that the media have misconstrued his work and become far too negative, sensational, and adversarial. Methodologically and conceptually, news values can be approached from four different perspectives: material (focusing on the material reality of events), cognitive (focusing on people's beliefs and value systems), social (focusing on journalistic practice), and discursive (focusing on the discourse). A discursive perspective tries to systematically examine how news values such as Negativity, Proximity, Eliteness, and others, are constructed through words and images in published news stories. This approach is influenced by linguistics and social semiotics, and is called "discursive news values analysis" (DNVA). It focuses on the "distortion" step in Galtung and Ruge's chain of news communication, by analysing how events are discursively constructed as newsworthy. History Initially labelled "news factors," news values are widely credited to Johan Galtung and Mari Holmboe Ruge. In their seminal 1965 study, Galtung and Ruge put forward a system of twelve factors describing events that together are used as defining "newsworthiness." Focusing on newspapers and broadcast news, Galtung and Ruge devised a list describing what they believed were significant contributing factors as to how the news is constructed. They proposed a "chain of news communication," which involves processes of selection (the more an event satisfies the "news factors," the more likely it is selected as news), distortion (accentuating the newsworthy factors of the event, once it has been selected), and replication (selection and distortion are repeated at all steps in the chain from event to reader). Furthermore, three basic hypotheses are presented by Galtung and Ruge: the additivity hypothesis that the more factors an event satisfies, the higher the probability that it becomes news; the complementary hypothesis that the factors will tend to exclude each other; and the exclusion hypothesis that events that satisfy none or very few factors will not become news. In 2001, the influential 1965 study was updated by Tony Harcup and Deirdre O'Neill, in a study of the British press. The findings of a content analysis of three major national newspapers in the UK were used to critically evaluate Galtung and Ruge's original criteria and to propose a contemporary set of news values. 
Forty years on, they found some notable differences, including the rise of celebrity news and that good news (as well as bad news) was a significant news value, as well as the newspaper's own agenda. They examined three tabloid newspapers. Contemporary news values In a rapidly evolving market, achieving relevance, giving audiences the news they want and find interesting, is an increasingly important goal for media outlets seeking to maintain market share. This has made news organizations more open to audience input and feedback, and forced them to adopt and apply news values that attract and keep audiences. Given these changes and the rapid rise of digital technology in recent years, Harcup and O’Neill updated their 2001 study in 2016, while other scholars have analysed news values in viral news shared via social media. The growth of interactive media and citizen journalism is fast altering the traditional distinction between news producer and passive audience and may in future lead to a redefinition of what "news" means and the role of the news industry. List of news values A variety of external and internal pressures influence journalistic decisions during the news-making process, which can sometimes lead to bias or unethical reporting. Many different factors have the potential to influence whether an event is first noticed by a news organisation, second whether a story will be written about that event, third, how that story is written, and fourth whether this story will end up being published as news and if so, where it is placed. Therefore, "there is no end to lists of news criteria." There are multiple competing lists of news values (including Galtung & Ruge's news factors, and others put forward by Schlesinger, Bell, Bednarek & Caple), with considerable overlap but also disagreement as to what should be included. News values can relate to aspects of events and actors, or to aspects of news gathering and processing: Values in news actors and events: Frequency: Events that occur suddenly and fit well with the news organization's schedule are more likely to be reported than those that occur gradually or at inconvenient times of day or night. Long-term trends are not likely to receive much coverage. Timeliness: Events that have only just happened, are current, ongoing, or are about to happen are newsworthy. Familiarity: To do with people or places close to the target audience. Others prefer the term Proximity for this news value, which includes geographical and cultural proximity (see "meaningfulness"). Negativity: Bad news is more newsworthy than good news. Sometimes described as "the basic news value." Conversely, it has also been suggested that Positivity is a news value in certain cases (such as sports news, science news, feel-good tabloid stories). Conflict: Opposition of people or forces resulting in a dramatic effect. Events with conflict are often quite newsworthy. Sometimes included in Negativity rather than listed as a separate news value. Unexpectedness: Events that are out of the ordinary, unexpected, or rare are more newsworthy than routine, unsurprising events. Unambiguity: Events whose implications are clear make for better copy than those that are open to more than one interpretation, or where any understanding of the implications depends on first understanding the complex background in which the events take place. Personalization: Events that can be portrayed as the actions of individuals will be more attractive than one in which there is no such "human interest." 
Personalization is about whether an event can be contextualised in personal terms (affecting or involving specific, "ordinary" people, not the generalised masses). Meaningfulness: This relates to the sense of identification the audience has with the topic. "Cultural proximity" is a factor here—events concerned with people who speak the same language, look the same, and share the same preoccupations as the audience receive more coverage than those concerned with people who speak different languages, look different and have different preoccupations. A related term is Relevance, which is about the relevance of the event as regards the target readers/viewers own lives or how close it is to their experiences. Impact refers more generally to an event's impact, on the target audience, or on others. An event with significant consequences (high impact) is newsworthy. Eliteness: Events concerned with global powers receive more attention than those concerned with less influential nations. Events concerned with the rich, powerful, famous and infamous get more coverage. Also includes the eliteness of sources – sometimes called Attribution. Superlativeness: Events with a large scale or scope or with high intensity are newsworthy. Consonance: Events that fit with the media's expectations and preconceptions receive more coverage than those that defy them (and for which they are thus unprepared). Note this appears to conflict with unexpectedness above. However, consonance really refers to the media's readiness to report an item. Consonance has also been defined as relating to editors' stereotypes and their mental scripts for how events typically proceed. Values in the news process: Continuity: A story that is already in the news gathers a kind of inertia. This is partly because the media organizations are already in place to report the story, and partly because previous reportage may have made the story more accessible to the public (making it less ambiguous). Composition: Stories must compete with one another for space in the media. For instance, editors may seek to provide a balance of different types of coverage, so that if there is an excess of foreign news for instance, the least important foreign story may have to make way for an item concerned with the domestic news. In this way the prominence given to a story depends not only on its own news values but also on those of competing stories. Competition: Commercial or professional competition between media may lead journalists to endorse the news value given to a story by a rival. Co-option: A story that is only marginally newsworthy in its own right may be covered if it is related to a major running story. Prefabrication: A story that is marginal in news terms but written and available may be selected ahead of a much more newsworthy story that must be researched and written from the ground up. Predictability: An event is more likely to be covered if it has been pre-scheduled. Story impact: The impact of a published story (not the event), for example whether it is being shared widely (sometimes called Shareability), read, liked, commented-on. To be qualified as shareable, a story arguably has to be simple, emotional, unexpected and triggered. Engaging with such analytics is now an important part of newsroom practice. Time constraints: Traditional news media such as radio, television and daily newspapers have strict deadlines and a short production cycle, which selects for items that can be researched and covered quickly. 
Logistics: Although eased by the availability of global communications even from remote regions, the ability to deploy and control production and reporting staff, and functionality of technical resources can determine whether a story is covered. Data: Media need to back up all of their stories with data in order to remain relevant and reliable. Reporters prefer to look at raw data in order to be able to take an unbiased perspective. An alternative term is Facticity – the favouring of facts and figures in hard news. One of the key differences in relation to these news values is whether they relate to events or stories. For example, composition and co-option both relate to the published news story. These are news values that concern how news stories fit with the other stories around them. The aim here is to ensure a balanced spread of stories with minimal duplication across a news program or edition. Such news values are qualitatively different from news values that relate to aspects of events, such as Eliteness (the elite status of news actors or sources) or Proximity (the closeness of the event's location to the target audience). Audience perceptions of news Conventional models concentrate on what the journalist perceives as news. But the news process is a two-way transaction, involving both news producer (the journalist) and the news receiver (the audience), although the boundary between the two is rapidly blurring with the growth of citizen journalism and interactive media. Little has been done to define equivalent factors that determine audience perception of news. This is largely because it would appear impossible to define a common factor, or factors, that generate interest in a mass audience. Basing his judgement on many years as a newspaper journalist Hetherington states that: "...anything which threatens people's peace, prosperity and well being is news and likely to make headlines." Whyte-Venables suggests audiences may interpret news as a risk signal. Psychologists and primatologists have shown that apes and humans constantly monitor the environment for information that may signal the possibility of physical danger or threat to the individual's social position. This receptiveness to risk signals is a powerful and virtually universal survival mechanism. A "risk signal" is characterized by two factors, an element of change (or uncertainty) and the relevance of that change to the security of the individual. The same two conditions are observed to be characteristic of news. The news value of a story, if defined in terms of the interest it carries for an audience, is determined by the degree of change it contains and the relevance that change has for the individual or group. Analysis shows that journalists and publicists manipulate both the element of change and relevance ('security concern') to maximize, or some cases play down, the strength of a story. Security concern is proportional to the relevance of the story for the individual, his or her family, social group and societal group, in declining order. At some point there is a Boundary of Relevance, beyond which the change is no longer perceived to be relevant, or newsworthy. This boundary may be manipulated by journalists, power elites and communicators seeking to encourage audiences to exclude, or embrace, certain groups: for instance, to distance a home audience from the enemy in time of war, or conversely, to highlight the plight of a distant culture so as to encourage support for aid programs. 
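To make the checklist of news values above more concrete, the following sketch scores a hypothetical event in the spirit of Galtung and Ruge's additivity hypothesis (the more factors an event satisfies, the more likely it is to be selected as news). This is a minimal illustration, not a validated instrument: the factor names, weights, and the example event are assumptions introduced here.

```python
# Hedged sketch: additive scoring of an event against a news-value checklist.
# Factor names, weights, and the example event are illustrative only.

NEWS_VALUES = [
    "frequency", "timeliness", "proximity", "negativity", "conflict",
    "unexpectedness", "unambiguity", "personalization", "eliteness",
    "superlativeness", "consonance",
]

def newsworthiness(event_factors, weights=None):
    """Additive score: sum of the degree (0-1) to which each value is satisfied.

    Following the additivity hypothesis, a higher total stands in for a higher
    probability of selection; weights let an outlet emphasise some values.
    """
    weights = weights or {}
    return sum(weights.get(name, 1.0) * event_factors.get(name, 0.0)
               for name in NEWS_VALUES)

# Example: a sudden, nearby, negative event involving elite actors.
event = {"timeliness": 1.0, "proximity": 0.8, "negativity": 0.9,
         "unexpectedness": 0.7, "eliteness": 0.6}
print(f"additive newsworthiness score: {newsworthiness(event):.2f}")
```

An outlet with a particular agenda could, for instance, pass a weights dictionary that doubles negativity or proximity; the additive structure of the score stays the same.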
In 2018, Hal Pashler and Gail Heriot published a study showing that perceptions of newsworthiness tend to be contaminated by a political usefulness bias. In other words, individuals tend to view stories that give them "ammunition" for their political views as more newsworthy, giving added credence to their own views. Evolutionary perspectives An evolutionary psychology explanation for why negative news has a higher news value than positive news starts with the empirical observation that the human perceptive system and lower-level brain functions have difficulty distinguishing between media stimuli and real stimuli. These lower-level brain mechanisms, which function on a subconscious level, make basic evaluations of perceptive stimuli, focus attention on important stimuli, and start basic emotional reactions. Research has also found that the brain differentiates between negative and positive stimuli and reacts more quickly and more automatically to negative stimuli, which are also better remembered. This likely has evolutionary explanations, since it was often important to quickly focus attention on, evaluate, and respond to threats. While the reaction to a strong negative stimulus is avoidance, a moderately negative stimulus instead causes curiosity and further examination. Negative news in the media is argued to fall into the latter category, which explains its popularity. Lifelike audiovisual media are argued to have particularly strong effects compared to reading. Women have on average stronger avoidance reactions to moderately negative stimuli. Men and women also differ on average in how they enjoy, evaluate, remember, comprehend, and identify with the people in news depending on whether the news is negatively or positively framed. Women's stronger avoidance reaction to moderately negative stimuli has been explained by it having been the role of men in evolutionary history to investigate and potentially respond aggressively to threats while women and children withdrew. It has been claimed that negative news is framed according to male preferences by the often male journalists who cover such news and that a more positive framing may attract a larger female audience. However, other scholars have urged caution as regards evolutionary psychology's claims about gender differences. See also Afghanistanism Agenda-setting theory News bias Mass media impact on spatial perception Media imperialism Media transparency Reporting bias Systemic bias The Media Equation Notes References External links Chart – Real and Fake News (2016)/Vanessa Otero (basis) (Mark Frauenfelder) Chart – Real and Fake News (2014) (2016)/Pew Research Center Discursive News Values Analysis
Learning Management
Learning Management is the capacity to design pedagogic strategies that achieve learning outcomes for students. The learning management concept was developed by Richard Smith of Central Queensland University (Australia) and is derived from architectural design (an artful arrangement of resources for definite ends); it is best rendered as design with intent. Learning management then means an emphasis on ‘the design and implementation of pedagogical strategies that achieve learning outcomes’. That is, in the balance between curriculum development and pedagogy, the emphasis falls squarely on pedagogical strategies. Underpinning the learning management premise is a new set of knowledge and skills, collectively referred to as a futures orientation, which attempts to prepare the mindsets and skillsets of teaching graduates for the conditions of social change that pervade local and global societies in the 2000s. The practitioner of learning management is referred to as a learning manager. Adjunct to the theory and practice of learning management is the Learning Management Design Process (LMDP). The LMDP is a curriculum planning process comprising 8 'learning design based' questions. The process was developed by Professor David Lynch of Central Queensland University in 1998 and is used primarily as a tool to train teachers to teach [3]. These 'eight questions', when answered in sequence, focus the teacher on what is important when planning to teach students. The LMDP organizes its 8 questions through three sequential phases: Outcomes, Strategy, and Evidence. Each phase represents the bodies of information that its associated questions seek to elicit. The LMDP represents a rethink of the various curriculum development models that have dominated the planning of teaching and curriculum in the developed world over past decades. The teacher develops their 'teaching plan' by engaging with each phase and its questions and recording ‘findings’ (or answers) in plan form. Definition A learning management system (LMS) is a software application or Web-based technology that ranges from managing training and training records to distributing courses to employees/students over the internet. Typically, LMSs provide an employer/instructor with a way to create and deliver specialized content, monitor employee/student participation, and assess overall performance and completion of the required courses. A learning management system may also provide employees/students with the ability to use interactive features such as managing courses, online assessment, threaded discussions, video conferencing, and discussion forums to reach their full potential. This software allows employees/students to take learning into their own hands, either staying current in their specific field or branching out and learning new skills. What is needed in a Learning Environment Online learning environments are a fairly new and fast-growing industry available to many individuals and companies around the world. It is important for the learning environment to offer a secure place where a large number of people can come to receive training and new skills so that they can grow and learn in their fields. Many times it is no longer possible for managers and professors to get an entire group of people together for a course or mandatory training. Companies and institutes are finding it hard to keep track of paperwork proving training completion, forms, and evaluations.
With an LMS these problems are solved: everything is now digital and available with just a few keystrokes. Some of the returns that companies get from investments in LMSs are the ability to quickly train employees and track their learning, the ability to train employees better, the ability to avoid fines by quickly demonstrating compliance, and the room they give employees to grow and learn through a full range of training. Because the LMS sector is a sizeable industry, the market is continuously growing and improving its services. Many of these companies encourage informational feedback from clients on what is working for them and what is not. Learning Management Industry In the relatively new LMS market, commercial vendors for corporate and education applications range from new entrants to those that entered the market in the nineties. In addition to commercial packages, many open-source software solutions are available. In 2005, LMSs represented a fragmented $500 million market (CLO magazine). The six largest LMS product companies constitute approximately 43% of the market. In addition to the remaining smaller LMS product vendors, training outsourcing firms, enterprise resource planning vendors, and consulting firms all compete for part of the learning management market. LMS buyers are less satisfied than a year ago. According to 2005 and 2006 surveys by the American Society for Training and Development (ASTD), the share of respondents who were very unsatisfied with an LMS purchase doubled and the share who were very satisfied decreased by 25%. The number that was very satisfied or satisfied edged over 50%. (About 30% were somewhat satisfied.) Nearly one quarter of respondents intended to purchase a new LMS or outsource their LMS functionality over the next 12 months. In a 2009 survey, a growing number of organizations reported deploying an LMS as part of larger Enterprise Resource Planning (ERP) systems. Channel learning is underserved. For many buyers, channel learning is not their number one priority, according to a survey by Training Outsourcing. Often there is a disconnect when the Human Resources department oversees training and development initiatives, where the focus is on consolidating LMS systems inside traditional corporate boundaries. Software technology companies are at the front end of this curve, placing a higher priority on channel training. References
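As a rough illustration of the record-keeping role described above (delivering courses, monitoring participation, and demonstrating compliance), the sketch below models course completion tracking. It is a minimal sketch under stated assumptions: the class names, fields, and the compliance rule are hypothetical and not drawn from any particular LMS product.

```python
# Hedged sketch of LMS-style completion and compliance tracking.
# All names and the compliance rule are hypothetical.
from dataclasses import dataclass
from datetime import date
from typing import Dict, List, Optional

@dataclass
class Course:
    course_id: str
    title: str
    mandatory: bool = False

@dataclass
class Enrollment:
    learner: str
    course: Course
    completed_on: Optional[date] = None  # None means not yet completed

class LearningRecordStore:
    """Keeps enrollment records and reports completion of mandatory training."""

    def __init__(self) -> None:
        self.enrollments: List[Enrollment] = []

    def enroll(self, learner: str, course: Course) -> Enrollment:
        record = Enrollment(learner, course)
        self.enrollments.append(record)
        return record

    def record_completion(self, learner: str, course_id: str, when: date) -> None:
        for e in self.enrollments:
            if e.learner == learner and e.course.course_id == course_id:
                e.completed_on = when

    def compliance(self) -> Dict[str, float]:
        """Fraction of mandatory enrollments each learner has completed."""
        totals: Dict[str, List[int]] = {}
        for e in self.enrollments:
            if e.course.mandatory:
                done, total = totals.setdefault(e.learner, [0, 0])
                totals[e.learner] = [done + (e.completed_on is not None), total + 1]
        return {learner: done / total for learner, (done, total) in totals.items()}

# Example: one mandatory safety course, two learners, one completion recorded.
store = LearningRecordStore()
safety = Course("SAFE-101", "Workplace safety", mandatory=True)
store.enroll("alice", safety)
store.enroll("bob", safety)
store.record_completion("alice", "SAFE-101", date(2024, 3, 1))
print(store.compliance())  # {'alice': 1.0, 'bob': 0.0}
```

A real system would add authentication, content delivery, assessments, and reporting, but the core value claimed above (quickly showing who has completed which mandatory training) reduces to a query over records like these.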
Trait leadership
Trait leadership is defined as integrated patterns of personal characteristics that reflect a range of individual differences and foster consistent leader effectiveness across a variety of group and organizational situations. The theory is developed from early leadership research which focused primarily on finding a group of heritable attributes that differentiate leaders from nonleaders. Leader effectiveness refers to the amount of influence a leader has on individual or group performance, followers’ satisfaction, and overall effectiveness. Many scholars have argued that leadership is unique to only a select number of individuals, and that these individuals possess certain immutable traits that cannot be developed. Although this perspective has been criticized immensely over the past century, scholars still continue to study the effects of personality traits on leader effectiveness. Research has demonstrated that successful leaders differ from other people and possess certain core personality traits that significantly contribute to their success. Understanding the importance of these core personality traits that predict leader effectiveness can help organizations with their leader selection, training, and development practices. History of research The emergence of the concept of trait leadership can be traced back to Thomas Carlyle's "great man" theory, which stated that "The History of the World [...] was the Biography of Great Men". Subsequent commentators interpreted this view to conclude that the forces of extraordinary leadership shape history. Influenced by Carlyle, Francis Galton in Hereditary Genius took this idea further. Galton found that leadership was a unique property of extraordinary individuals and suggested that the traits that leaders possessed were immutable and could not be developed. Throughout the early 1900s, the study of leadership focused on traits. Cowley commented that the approach to the research of leadership has usually been and should always be through the study of traits. Many theorists, influenced by Carlyle and Galton, believed that trait leadership depended on the personal qualities of the leader, however, they did not assume that leadership only resides within a select number of people. This trait perspective of leadership was widely accepted until the late 1940s and early 1950s, when researchers began to deem personality traits insufficient in predicting leader effectiveness. In 1948, Stogdill stated that leadership exists between persons in a social situation, and that persons who are leaders in one situation may not necessarily be leaders in other situations. This statement has been cited ubiquitously as sounding the death knell for trait-leadership theory. Furthermore, scholars commented that any trait's effect on leadership behavior will always depend on the situation. Subsequently, leadership stopped being characterized by individual differences, and instead both behavioral and situational analyses of leadership took over. These analyses began to dominate the field of leadership research. During this period of widespread rejection, several dominant theories took the place of trait leadership theory, including Fiedler's contingency model, Blake and Mouton's managerial grid, Hersey and Blanchard's situational leadership model, and transformational and transactional leadership models. Despite the growing criticisms of trait leadership, the purported basis for the rejection of trait-leadership models began to encounter strong challenges in the 1980s. 
Zaccaro pointed out that even Stogdill's 1948 review, although cited as evidence against leader traits, contained conclusions supporting that individual differences could still be predictors of leader effectiveness. With an increasing number of empirical studies directly supporting trait leadership, traits have reemerged in the lexicon of the scientific research into leadership. In recent years, the research about leader traits has made some progress in identifying a list of personality traits that are highly predictive of leader effectiveness. Additionally, to account for the arguments for situational leadership, researchers have used the round-robin design methodology to test whether certain individuals emerge as leaders across multiple situations. Scholars have also proposed new ways of studying the relationship of certain traits to leader effectiveness. For instance, many suggest the integration of trait and behavioral theories to understand how traits relate to leader effectiveness. Furthermore, scholars have expanded their focus and have proposed looking at more malleable traits (ones susceptible to development) in addition to the traditional dispositional traits as predictors of leader effectiveness. Context is only now beginning to be examined as a contributor to leaders' success and failure. Productive narcissistic CEOs like Steven Jobs of Apple and Jack Welch of GE have demonstrated a gift for creating innovation, whereas leaders with idealized traits prove more successful in more stable environments requiring less innovation and creativity. Cultural fit and leadership value can be determined by evaluating an individual's own behavior, perceptions of their employees and peers, and the direct objective results of their organization, and then comparing these findings against the needs of the company. Leadership traits The investigations of leader traits are always by no means exhaustive. In recent years, several studies have made comprehensive reviews about leader traits that have been historically studied. There are many ways that traits related to leadership can be categorized; however, the two most recent categorizations have organized traits into (1) demographic vs. task competence vs. interpersonal and (2) distal (trait-like) vs. proximal (state-like): Demographic, task competence and interpersonal leadership Based on a recent review of the trait leadership literature, Derue et al stated that most leader traits can be organized into three categories: demographic, task competence, and interpersonal attributes. For the demographics category, gender has by far received the most attention in terms of leadership; however, most scholars have found that male and female leaders are both equally effective. Task competence relates to how individuals approach the execution and performance of tasks. Hoffman et al grouped intelligence, conscientiousness, openness to experience, and emotional stability into this category. Lastly, interpersonal attributes are related to how a leader approaches social interactions. According to Hoffman et al, Extraversion and Agreeableness should be grouped into this category. Distal (trait-like) vs. proximal (state-like) Recent research has shifted from focusing solely on distal (dispositional/trait-like) characteristics of leaders to more proximal (malleable/state-like) individual differences often in the form of knowledge and skills. 
The hope is that the emergence of proximal traits in trait leadership theory will help researchers elucidate the old question of whether leaders are born or made. Proximal individual differences suggest that the characteristics that distinguish effective leaders from non-effective leaders are not necessarily stable through the life-span, implying that these traits may be able to be developed. Hoffman et al examined the effects of distal vs. proximal traits on leader effectiveness. They found that the distal individual differences of achievement motivation, energy, flexibility, dominance, honesty/integrity, self-confidence, creativity, and charisma were strongly correlated with leader effectiveness. Additionally, they found that the proximal individual differences of interpersonal skills, oral communication, written communication, management skills, problem solving skills, and decision making were also strongly correlated with leader effectiveness. Their results suggested that on average, distal and proximal individual differences have a similar relationship with effective leadership. Trait-leadership model Zaccaro et al created a model to understand leader traits and their influence on leader effectiveness/performance. This model is based on other models of leader traits and leader effectiveness/performance, and rests on two basic premises about leader traits. The first premise is that leadership emerges from the combined influence of multiple traits as opposed to emerging from the independent assessment of traits. Zaccaro argued that effective leadership is derived from an integrated set of cognitive abilities, social capabilities, and dispositional tendencies, with each set of traits adding to the influence of the others. The second premise is that leader traits differ in their proximal influence on leadership. This model is a multistage one in which certain distal attributes (i.e. dispositional attributes, cognitive abilities, and motives/values) serve as precursors for the development of proximal personal characteristics (i.e. social skills, problem solving skills and expertise knowledge). Adopting this categorization approach and drawing on several comprehensive reviews and meta-analyses of trait leadership in recent years, researchers have tried to make an inclusive list of leader traits. However, investigations of leader traits are by no means exhaustive. Other models of trait leadership Multiple models have been proposed to explain the relationship of traits to leader effectiveness. Recently, integrated trait leadership models were put forward by summarizing the historical findings and reconciling the conflict between traits and other factors such as situations in determining effective leadership. In addition to Zaccaro's Model of Leader Attributes and Leader Performance described in the previous section, two other models have emerged in recent trait leadership literature. The Leader Trait Emergence Effectiveness (LTEE) Model, created by Judge, Piccolo, & Kosalka in 2009, combines behavioral genetics and evolutionary psychology theories of how personality traits develop into a model that explains leader emergence and effectiveness. Additionally, this model separates objective and subjective leader effectiveness into different criteria. The authors created this model to be broad and flexible so as to diverge from how the relationship between traits and leadership had been studied in past research.
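To illustrate the multistage premise just described, in which distal attributes act largely through the proximal characteristics they help develop, here is a toy simulation. The coefficients, noise levels, and variable names are invented for the example; they are not estimates from Zaccaro's model or from Hoffman et al's data.

```python
# Hedged toy simulation of a distal -> proximal -> effectiveness chain.
# All coefficients and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Distal (trait-like) attribute, e.g. a dispositional composite.
distal = rng.normal(size=n)

# Proximal (state-like) skill, partly developed from the distal attribute.
proximal = 0.6 * distal + rng.normal(scale=0.8, size=n)

# Rated effectiveness, driven mostly through the proximal skill.
effectiveness = 0.7 * proximal + 0.1 * distal + rng.normal(scale=0.8, size=n)

def corr(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"distal   vs effectiveness: r = {corr(distal, effectiveness):.2f}")
print(f"proximal vs effectiveness: r = {corr(proximal, effectiveness):.2f}")
```

In this setup the distal trait still correlates with effectiveness, but much of that association runs through the proximal skill, which is the kind of indirect pathway the multistage model posits.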
Another model that has emerged in the trait leadership literature is the Integrated Model of Leader Traits, Behaviors, and Effectiveness. This model combines traits and behaviors in predicting leader effectiveness and tests the mediation effect of leader behaviors on the relationship between leader traits and effectiveness. The authors found that some types of leader behaviors mediated the relationship between traits and leader effectiveness. The results of a Derue et al study supported an integrated trait-behavioral model that can be used in future research. Criticisms of trait leadership Although there has been an increased focus by researchers on trait leadership, this theory remains one of the most criticized theories of leadership. Over the years, many reviewers of trait leadership theory have commented that this approach to leadership is "too simplistic" and "futile". Additionally, scholars have noted that trait leadership theory usually focuses only on how leader effectiveness is perceived by followers rather than on a leader's actual effectiveness. Because the process through which personality predicts the actual effectiveness of leaders has been relatively unexplored, these scholars have concluded that personality currently has low explanatory and predictive power over job performance and cannot help organizations select leaders who will be effective. Furthermore, Derue et al found that leader behaviors are more predictive of leader effectiveness than are traits. Another criticism of trait leadership is its silence on the influence of the situational context surrounding leaders. Stogdill found that persons who are leaders in one situation may not be leaders in another situation. Complementing this situational theory of leadership, Murphy wrote that leadership does not reside in the person, and that it usually requires examining the whole situation. In addition to situational leadership theory, there has been growing support for other leadership theories such as transformational, transactional, charismatic, and authentic leadership theories. These theories have gained popularity because they are more normative than the trait and behavioral leadership theories. Previously, studies failed to uncover a trait or group of traits that are consistently associated with leadership emergence or that help differentiate leaders from followers, but more recent research supports a link between narcissism and the emergence of leadership. Additionally, trait leadership's focus on a small set of personality traits and neglect of more malleable traits such as social skills and problem solving skills has received considerable criticism. Lastly, trait leadership often fails to consider the integration of multiple traits when studying the effects of traits on leader effectiveness. Implications for practice Given the recent increase in evidence and support of trait leadership theory, scholars have suggested a variety of strategies for human resource departments within organizations. Companies should use personality traits as selection tools for identifying emerging leaders. These companies, however, should be aware of the individual traits that predict success in leader effectiveness as well as the traits that could be detrimental to leader effectiveness.
For example, while Derue et al found that individuals who are high in Conscientiousness, Extraversion, and Agreeableness are predicted to be more likely to be perceived as successful in leadership positions, Judge et al wrote that individuals who are high in narcissism are more likely to be a liability in certain jobs. Narcissism is just one example of a personality trait that should be explored further by HR practitioners to ensure they are not placing individuals with certain traits in the wrong positions. Complementing the suggestion that personality traits should be used as selection tools, it was found that the Big Five Personality traits were more strongly related to leadership than intelligence. This finding suggests that selecting leaders based on their personality is more important than selecting them based on intelligence. If organizations select leaders based on intelligence, it is recommended that these individuals be placed in leadership positions when the stress level is low and the individual has the ability to be directive. Another way in which HR practitioners can use the research on trait leadership is for leadership development programs. Although inherent personality traits (distal/trait-like) are relatively immune to leadership development, Zaccaro suggested that proximal traits (state-like) will be more malleable and susceptible to leadership development programs. Companies should use different types of development interventions to stretch the existing capabilities of their leaders. There is also evidence to suggest that Americans have an Extrovert Ideal, which dictates that people, most times unconsciously, favor the traits of extroverted individuals and suppress the qualities unique to introverts. Susan Cain's research points to a transition sometime around the turn of the century during which we stopped evaluating our leaders based on character and began judging them instead based on personality. While both extroverted and introverted leaders have been shown to be effective, we have a general proclivity towards extroverted traits, which when evaluating trait leadership, could skew our perception of what's that important. See also Notes Footnotes References Further reading Arvey, R. D., Rotundo, M., Johnson, W., Zhang, Z., & McGue, M. (2006). The determinants of leadership role occupancy: Genetic and personality factors. The Leadership Quarterly, 17, 1-20. Barrick, M. R., Stewart, G. L., & Piotrowski, M. (2002). Personality and job performance: Test of the mediating effects of motivation among sales representatives. Journal of Applied Psychology, 87(1), 43-51. Hogan, R. (1983). A socioanalytic theory of personality. In M. M. Page (ed.), 1982 Nebraska symposium on motivation (pp. 55−89). Lincoln, NE: University of Nebraska Press. Hogan, R. (1996). A socioanalytic perspective on the five-factor model. In J. S. Wiggins (ed.), The five-factor model of personality (pp. 163−179). New York: Guilford Press. Ilies, R., Arvey, R. D., & Bouchard, T. J. (2006). Darwinism, behavioral genetics, and organizational behavior, A review and agenda for future research. Journal of Organizational Behavior, 27(2), 121-141. Johnson, A. M., Vernon, P. A., Harris, J. A., & Jang, K. L. (2004). A behavioral investigation of the relationship between leadership and personality. Twin Research, 7, 27−32. Kirkpatrick, S. A., & Locke, E. A. (1996). Direct and indirect effects of three core charismatic leadership components on performance and :attitudes. Journal of Applied Psychology, 81(1), 36-51. 
Turkheimer, E. (2000). Three laws of behavior genetics and what they mean. Current Directions in Psychological Science, 9(5), 160-164.
Engaged theory
Engaged theory is a methodological framework for understanding the social complexity of a society, by using social relations as the base category of study, with the social always understood as grounded in the natural, including people as embodied beings. Engaged theory progresses from detailed, empirical analysis of the people, things, and processes of the world to abstract theory about the constitution and social framing of people, things, and processes. As a type of critical theory, engaged theory is cross-disciplinary, drawing from sociology, anthropology, and political studies, history, philosophy, and global studies to engage with the world whilst seeking to change the world. Examples of engaged theory are the constitutive abstraction approach of writers, such as John Hinkson, Geoff Sharp, and Simon Cooper, who published in Arena Journal; and the approach developed at the Centre for Global Research of the Royal Melbourne Institute of Technology, Australia by scholars such as Manfred Steger, Paul James and Damian Grenfell, who draw from the works of Pierre Bourdieu, Benedict Anderson, and Charles Taylor, et al. Politics of engagement Engaged theory research is in the world and of the world, whereby a theory somehow affects what occurs in the world, but engaged theory does not always include itself into a theory about the constitution of ideas and practices, which the sociologist Anthony Giddens identifies as a double hermeneutic movement. Engaged theory is explicit about its political standpoint, thus, in Species Matters: Human Advocacy and Cultural Theory, Carol J. Adams explained that: “Engaged theory ... arises from anger about what is, theory that envisions what is possible. Engaged theory makes change possible.” Moreover, in the praxis of engaged theory, theoreticians must be aware of their own tendencies to be ideologically driven by the dominant concerns of the time in which the theory is presented; for example, the ideology of Liberalism is reductive in its advocacy of and for 'freedom', fails to reflect upon the influence of the ideology of the liberal advocate. Grounding of analysis All social theories are dependent upon a process of abstraction. This is what philosophers call epistemological abstraction. However, they do not characteristically theorize their own bases for establishing their standpoint. Engaged theory does. By comparison, grounded theory, a very different approach, suggests that empirical data collection is a neutral process that gives rise to theoretical claims out of that data. Engaged theory, to the contrary, treats such a claim to value neutrality as naively unsustainable. Engaged theory is thus reflexive in a number of ways: Firstly, it recognises that doing something as basic as collecting data already entails making theoretical presuppositions. Secondly, it names the levels of analysis from which theoretical claims are made. Engaged theory works across four levels of theoretical abstraction. (See below: § Modes of analysis.) Thirdly, it makes a clear distinction between theory and method, suggesting that a social theory is an argument about a social phenomenon, while an analytical method or set of methods is defined a means of substantiating that theory. Engaged theory in these terms works as a 'Grand method', but not a 'grand theory'. It provides an integrated set of methodological tools for developing different theories of things and processes in the world. 
Fourthly, it seeks to understand both its own epistemological basis, while treating knowledge formation as one of the basic ontological categories of human practice. Fifthly, it treats history as a modern way of understanding temporal change; and therefore different ontologically from a tribal saga or cosmological narrative. In other words, it provides meta-standpoint on its own capacity to historicize. Modes of analysis In the version of Engaged theory developed by an Australian-based group of writers, analysis moves from the most concrete form of analysis—empirical generalization—to more abstract modes of analysis. Each subsequent mode of analysis is more abstract than the previous one moving across the following themes: 1. doing, 2. acting, 3. relating, 4. being. This leads to the 'levels' approach as set out below: 1. Empirical analysis (ways of doing) The method begins by emphasizing the importance of a first-order abstraction, here called empirical analysis. It entails drawing out and generalizing from on-the-ground detailed descriptions of history and place. This first level involves generating empirical description based on observation, experience, recording or experiment—in other words, abstracting evidence from that which exists or occurs in the world—or it involves drawing upon the empirical research of others. The first level of analytical abstraction is an ordering of ‘things in the world’, in a way that does not depend upon any kind of further analysis being applied to those ‘things’. For example, the Circles of Sustainability approach is a form of engaged theory distinguishing (at the level of empirical generalization) between different domains of social life. It can be used for understanding and assessing quality of life. Although that approach is also analytically defended through more abstract theory, the claim that economics, ecology, politics and culture can be distinguished as central domains of social practice has to be defensible at an empirical level. It needs to be useful in analysing situations on the ground. The success or otherwise of the method can be assessed by examining how it is used. One example of use of the method was a project on Papua New Guinea called Sustainable Communities, Sustainable Development. 2. Conjunctural analysis (ways of acting) This second level of analysis, conjunctural analysis, involves identifying and, more importantly, examining the intersection (the conjunctures) of various patterns of action (practice and meaning). Here the method draws upon established sociological, anthropological and political categories of analysis such as production, exchange, communication, organization and inquiry. 3. Integrational analysis (ways of relating) This third level of entry into discussing the complexity of social relations examines the intersecting modes of social integration and differentiation. These different modes of integration are expressed here in terms of different ways of relating to and distinguishing oneself from others—from the face-to-face to the disembodied. Here we see a break with the dominant emphases of classical social theory and a movement towards a post-classical sensibility. 
In relation to the nation-state, for example, we can ask how it is possible to explain a phenomenon that, at least in its modern variant, subjectively explains itself by reference to face-to-face metaphors of blood and place—ties of genealogy, kinship and ethnicity—when the objective 'reality' of all nation-states is that they are disembodied communities of abstracted strangers who will never meet. This accords with Benedict Anderson's conception of 'imagined communities', but recognizes the contradictory formation of that kind of community. 4. Categorical analysis (ways of being) This level of enquiry is based upon an exploration of the ontological categories (categories of being such as time and space). If the previous form of analysis emphasizes the different modes through which people live their commonalities with or differences from others, those same themes are examined through more abstract analytical lenses of different grounding forms of life: respectively, embodiment, spatiality, temporality, performativity and epistemology. At this level, generalizations can be made about the dominant modes of categorization in a social formation or in its fields of practice and discourse. It is only at this level that it makes sense to generalize across modes of being and to talk of ontological formations, societies as formed in the uneven dominance of formations of tribalism, traditionalism, modernism or postmodernism. See also Anthropology Antipositivism Arena (Australian publishing co-operative) Critical theory Critical animal studies Epistemology Grounded theory Post-Marxism Quality of life Ontology Social change Sociology References Further reading Critical theory Historiography Philosophical methodology Social theories
Cultural trait
A cultural trait is a single identifiable material or non-material element within a culture, and is conceivable as an object in itself. Similar traits can be grouped together as components, or subsystems of culture; the terms sociofact and mentifact (or psychofact) were coined by biologist Julian Huxley as two of three subsystems of culture—the third being artifacts—to describe the way in which cultural traits take on a life of their own, spanning over generations. In other words, cultural traits can be categorized into three interrelated components: Artifacts — the objects, material items, and technologies created by a culture, or simply, things people make. They provide basic necessities, recreation, entertainment, and most of the things that make life easier for people. Examples include clothing, food, and shelter. Sociofacts — interpersonal interactions and social structures; i.e., the structures and organizations of a culture that influence social behaviour. This includes families, governments, education systems, religious groups, etc. Mentifact (or psychofact) — abstract concepts, or "things in the head;" i.e., the shared ideas, values, and beliefs of a culture. This can include religion, language, and ideas. Moreover, sociofacts are considered by some to be mentifacts that have been shared through artifacts. This formulation has been related to memetics and the memetic concept of culture. These concepts have been useful to anthropologists in refining the definition of culture. Development These concepts have been useful to anthropologists in refining the definition of culture, which Huxley views as contemplating artifacts, mentifacts, and sociofacts. For instance, Edward Tylor, the first academic anthropologist, included both artifacts and abstract concepts like kinship systems as elements of culture. Anthropologist Robert Aunger, however, explains that such an inclusive definition ends up encouraging poor anthropological practice because "it becomes difficult to distinguish what exactly is not part of culture." Aunger goes on to explain that, after the cognitive revolution in the social sciences in the 1960s, there is "considerable agreement" among anthropologists that a mentifactual analysis, one that assumes that culture consists of "things in the head" (i.e. mentifacts), is the most appropriate way to define the concept of culture. Sociofact The idea of the sociofact was developed extensively by David Bidney in his 1967 textbook Theoretical Anthropology, in which he used the term to refer to objects that consist of interactions between members of a social group. Bidney's 'sociofact' includes norms that "serve to regulate the conduct of the individual within society." The concept has since been used by other philosophers and social scientists in their analyses of varying kinds of social groups. For instance, in a discussion of the semiotics of the tune 'Taps', semiotician of music Charles Boilès claims that although it is a single piece of music, it can be seen as three distinct musical sociofacts: as a "last call" signal in taverns frequented by soldiers; as an "end of day" signal on military bases; and hence, symbolically, as a component of military funerals. The claim has been made that sociofactual analysis can play a decisive role for the performance of, and collaboration within, organizations. See also Meme Cultural universal References Cultural anthropology Memetics Cultural concepts Semiotics
Positive youth development
Positive youth development (PYD) programs are designed to optimize youth developmental progress. This is sought through a positivistic approach that emphasizes the inherent potential, strengths, and capabilities youth hold. PYD differs from other approaches within youth development work in that it rejects an emphasis on trying to correct what is considered wrong with children's behavior or development, renouncing a problem-oriented lens. Instead, it seeks to cultivate various personal assets and external contexts known to be important to human development. Youth development professionals live by the motto originally coined by Karen Pittman, "problem free is not fully prepared", as they work to grow youth into productive members of society. Seen through a PYD lens, young people are not regarded as "problems to be solved"; rather, they are seen as assets, allies, and agents of change who have much to contribute in solving the problems that affect them most. Programs and practitioners seek to empathize with, educate, and engage children in productive activities in order to help youth "reach their full potential". Though the field is still growing, PYD has been used across the world to address social divisions, such as gender and ethnic differences. Background Positive youth development originated from ecological systems theory to focus on the strengths of adolescents. Central to this theory is the understanding that there are multiple environments that influence children. Similar to the principles of positive psychology, the theory of PYD suggests that "if young people have mutually beneficial relations with the people and institutions of their social world, they will be on the way to a hopeful future marked by positive contributions to self, family, community, and civil society." The major catalyst of positive youth development came as a response to the punitive methods of the "traditional youth development" approach. The traditional approach makes a connection between the changes occurring during adolescent years and the beginning or peaking of several public health and social problems, including homicide, suicide, substance use and abuse, sexually transmitted infections, teen and unplanned pregnancies. This connection was made infamous by developmental psychologist G. Stanley Hall who described adolescence as a time of "storm and stress". Another aspect of the traditional approach is that many professionals and mass media portrayed adolescents as inevitable problems that simply needed to be fixed. This "fixing" motivated the "solving" of single-problem behavior, such as substance abuse. Specific evidence of this "problem-centered" model is present across professional fields that deal with young people. Language that reflects this approach includes the “at-risk child” and “the juvenile delinquent”. Many connections can also be made to the current U.S. criminal justice model that favors punishment as opposed to prevention. The concept and practice of positive youth development "grew from the dissatisfaction with a predominant view that underestimated the true capacities of young people by focusing on their deficits rather than their development potential." PYD asserts that youth have inherent strengths and if given opportunities, support, and acknowledgement they can thrive. Encouraging the positive development of adolescents can ease the transition into healthy adulthood. Therefore, emphasis is placed on asset-building. 
Crucial to the outlining of asset-building is Peter Benson's list of developmental assets. This list is divided into two categories: internal assets (positive individual characteristics) and external assets (community characteristics). Furthermore, research findings point out that PYD provides a sense of “social belonging”, participatory motivation in academic-based and community activities for positive educational outcomes, a sense of social responsibility and civic engagement, and participation in organized activities that would aid in self-development. Goals PYD focuses on the active promotion of optimal human development, rather than on the scientific study of age-related change (which distinguishes it from the study of child development or adolescent development) or on youth programming solely as a means of avoiding risky behaviors. Rather than grounding its developmental approach in the presence of adversity, risk or challenge, a PYD approach considers the potential and capacity of each individual young person. A hallmark of these programs is that they are based on the concept that children and adolescents have strengths and abilities unique to their developmental stage and that they are not merely "inadequate" or "undeveloped" adults. Lerner and colleagues write: "The goal of the positive youth development perspective is to promote positive outcomes. This idea is in contrast to a perspective that focuses on punishment and the idea that adolescents are broken". Positive youth development is at once a vision, an ideology, and a new vocabulary for engaging with youth development. Its tenets can be organized into the 5 C's: competence, confidence, connection, character, and caring. When these 5 C's are present, the 6th C of "contribution" is realized. Key features Positive youth development programs typically recognize contextual variability in youths' experience and in what is considered healthy or optimal development for youth in different settings or cultures. This cultural sensitivity reflects the influence of Bronfenbrenner's ecological systems theory. The influence of ecological systems theory is also seen in the emphasis many youth development programs place on the interrelationship of different social contexts through which the individual moves (e.g. family, peers, school, work, and leisure). This means that PYD seeks to involve youth in multiple kinds of prosocial relationships to promote the young person's wellness, safety, and healthy maturation. Such engagement may be sought "within their communities, schools, organizations, peer groups, and families". As a result, PYD seeks to build "community capacity". The community is involved in order to facilitate a sense of security and identity. Likewise, youth are encouraged to be involved in the community. The University of Minnesota's Keys to Quality Youth Development summarizes eight key elements of programs that successfully promote youth development. Such programs are physically and emotionally safe, give youth a sense of belonging and ownership, foster self-worth, facilitate discovery of their "selves" (identities, interests, strengths), foster high-quality and supportive relationships with peers and adults, help youth recognize conflicting values and develop their own, foster the development of new skills, create a fun environment, and develop hope for the future.
In addition, programs that employ PYD principles generally have one or more of the following features: promote bonding; foster resilience; promote social, emotional, cognitive, behavioral, and moral competence; encourage service; foster self-determination; foster spirituality; foster self-efficacy; foster a clear and positive identity; foster belief in the future; set expectations; facilitate identity creation; provide recognition for positive behavior and opportunities for pro-social involvement; promote empowerment; promote responsibility; and foster pro-social norms. Using PYD to address stereotypes and inequality Gender Positive youth development principles can be used to address gender inequities through the promotion of programs such as "Girls on the Run." Physical activity-based programs like "Girls on the Run" are being increasingly used around the world for their ability to encourage psychological, emotional, and social development for youth. "Girls on the Run" enhances this type of physical activity program by specifically targeting female youth in an effort to reduce the gendered view of a male-dominated sports arena. "Girls on the Run" is a non-profit organization begun in 1996 that distributes a 12-week training program to help girls prepare for a 5k running competition. This particular program is made available to 3rd through 5th grade female students throughout the United States and Canada to be implemented in either school or community-based settings. Another example of positive youth development principles being used to target youth gender inequities can be seen in a participatory diagramming approach in Kibera, Kenya. This community development effort enabled participants to feel safe discussing their concerns regarding gender inequities in the community with the dominant male group. This approach also enabled youth to voice their needs and identify potential solutions related to topics like HIV/AIDS and family violence. Ethnic minorities in the United States Positive youth development can be used to combat negative stereotypes surrounding youth of minority ethnic groups in the U.S. After-school programs have been directly geared to generate increased participation among African American and Latino youth, with a focus on academic achievement and increasing high school graduation rates. Studies have found programs targeting African American youth are more effective when they work to bolster a sense of their cultural identity. PYD has even been used to help develop and strengthen the cultural identities of American Indian and Alaskan Native youth. PYD methods have been used to provide a supportive setting in which to engage youth in traditional activities. Various programs have been implemented related to sports, language, and arts and crafts. Sports programs that use positive youth development principles are commonly referred to as "sports-based youth development" (SBYD) programs. SBYD incorporates positive youth development principles into program and curriculum design and coach training. Many factors, such as low income, redlining, racial barriers and prejudice, mental illness or mental health challenges, and substance abuse, have impacted ethnic minorities in the United States. Youth who are at risk of falling into negative behaviors need positive youth development programs to help them avoid entering the juvenile justice system. 
Research shows that there is improvement in youth behavior with PYD: "Programs consisting of repressive and punitive elements were ineffective, whereas programs targeting positive social relations of at-risk youth (providing informal and supportive social control) proved to be successful". When PYD is incorporated in after-school programs, youth receive academic support and mental health services. PYD also provides mentors who lend support to youth and encourage them to believe in themselves, despite what the system and society tell them. Models of implementation Asia The key constructs of PYD listed above have been generally accepted throughout the world with some regional distinctions. For example, a Chinese Positive Youth Development Scale has been developed to conceptualize how these features are applicable to Chinese youth. The Chinese Positive Youth Development Scale was used as a measure in a study of Chinese youth in secondary schools in Hong Kong that indicated positive youth development has a direct impact on life satisfaction and on reducing problem behavior among Chinese youth. One specific example of PYD implementation is seen in the project "P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) to Adulthood: A Jockey Club Youth Enhancement Scheme." This program targets junior secondary school students in Hong Kong (grades 7 through 9 in the North American system). The program is composed of two tiers, the first of which is a structured curriculum focusing on the 15 PYD constructs and designed for all students as a "universal prevention initiative." The Tier 2 Program is a more selective prevention model directly targeting students with greater psychosocial needs identified by the school social work service providers. The label "at-risk" is intentionally avoided because the term carries a very negative stigma in Chinese culture and would therefore discourage participation in the program. Although Chinese social work agencies commonly target students with greater psychosocial needs, these PYD programs have rarely undergone thorough systematic evaluation and documentation. Europe In Portugal, the utility of positive youth development principles in sporting contexts is beginning to be recognized. Several athletic-based programs have been implemented in the country, but more research is necessary to determine their effectiveness at this point. Latin America and the Caribbean Positive youth development has also been seen in the form of youth volunteer service throughout Latin America and the Caribbean. From Mexico and the Caribbean to Central and South America, this form of implementation has been acknowledged for encouraging both personal and community development, while oftentimes contributing to poverty reduction. It has furthermore been seen as a way of promoting civic engagement through various service opportunities in communities. Positive youth development efforts can be seen in the work of the United States Agency for International Development (USAID) in collaboration with various regional governments and the private sector across Latin America and the Caribbean. This work has focused on providing broader educational options, skills training, and opportunities for economically disadvantaged youth to obtain apprenticeships. The ¡Supérate! Centers across El Salvador are one example, as they are supported by USAID in combination with private companies and foundations, and offer expanded education for high-performing students from poorer economic backgrounds. 
As of 2011, there were seven centers in El Salvador, and USAID expressed plans to expand this model across Central America. In Brazil, the Jovem Plus program offers high-demand skills training for youth in disadvantaged communities in Rio de Janeiro and the northeastern area of the nation. Other programs include the "Youth Movement against Violence" in Guatemala and "Youth Upliftment through Employment" in Jamaica. USA Rates of juvenile offending were increasing as youth turned to harmful habits that affected their academic standing and their behavior outside of school. These rates affected community well-being, so finding positive development solutions that would help youth behave well at school and elsewhere became a governmental concern. The government realized it would need to start working with youth at the school level, as youth who are suspended have a higher chance of becoming involved in the juvenile justice system. One approach under debate is a socio-emotional learning (SEL) program that includes the Monarch Room (MR) intervention, a trauma-informed alternative to school discipline. The MR was designed to promote socio-emotional regulation, and staff were trained in counseling and trauma-informed practices to help youth with sensory states, thoughts, feelings, and "subsequent behaviors". The SEL research was a 10-year study, and the results showed that Grade 9 students had the highest use of the MR and that, on average, students used it five times a year. The program was successful overall, as it showed that youth wanted support, and the introduction of the MR led to a decrease in the use of school suspension. However, there was no comparison group to help determine whether the decreased levels of school disciplinary actions (SDA) were due to the MR initiative. Another solution under debate for reducing school suspension is the Positive Behaviour Interventions and Supports (PBIS) framework. This program uses a three-tier approach to improve school climate: tier 1 teaches expectations to all students; tier 2 provides targeted support for small groups of students displaying challenging behavior; and tier 3 intervenes individually with students with intense behavioral needs. Research did find a statistically significant difference in SDA reduction between schools using PBIS and those not using it, particularly for students with disabilities and BIPOC students. However, the researchers did acknowledge that using a PBIS framework does not significantly affect the most severe behaviors, e.g., weapons offenses, because, as an intervention, it does not target those types of incidents. PBIS is a proactive and preventative approach. Ratings from participants were overwhelmingly positive; however, there are concerns about the time required to implement the framework, which is worth exploring further. An additional solution is restorative practices, which are associated with reduced suspension rates, suggesting that school-based restorative practices are a promising approach to reducing exclusionary discipline outcomes. These practices aim to build a positive school culture and environment; they focus on the problem rather than on blaming or punishing. To determine the effectiveness of restorative justice practices (RJP), researchers examined interviews, focus groups, observations, school artifacts, and suspension data. RJP uses responsive circles, mediations, and re-entry circles for students involved in conflict. 
They implement RJP to facilitate conflict resolution and remove policies that compete with these practices, i.e., punitive consequences. See also Comprehensive sex education Culture and positive psychology Growth mindset Positive education Youth services Youth intervention References External links Youth
Theory-theory
The theory-theory (or theory theory) is a scientific theory relating to the human development of understanding about the outside world. This theory asserts that individuals hold a basic or 'naïve' theory of psychology ("folk psychology") to infer the mental states of others, such as their beliefs, desires or emotions. This information is used to understand the intentions behind that person's actions or predict future behavior. The term 'perspective taking' is sometimes used to describe how one makes inferences about another person's inner state using theoretical knowledge about the other's situation. This approach has become popular with psychologists as it gives a basis from which to explore human social understanding. Beginning in the mid-1980s, several influential developmental psychologists began advocating the theory theory: the view that humans learn through a process of theory revision closely resembling the way scientists propose and revise theories. Children observe the world, and in doing so, gather data about the world's true structure. As more data accumulates, children can revise their naive theories accordingly. Children can also use these theories about the world's causal structure to make predictions, and possibly even test them out. This concept is described as the 'Child Scientist' theory, proposing that a series of personal scientific revolutions is required for the development of theories about the outside world, including the social world. In recent years, proponents of Bayesian learning have begun describing the theory theory in a precise, mathematical way. The concept of Bayesian learning is rooted in the assumption that children and adults learn through a process of theory revision; that is, they hold prior beliefs about the world but, when receiving conflicting data, may revise these beliefs depending upon their strength. Child development Theory-theory states that children naturally attempt to construct theories to explain their observations. As all humans do, children seek to find explanations that help them understand their surroundings. They learn through their own experiences as well as through their observations of others' actions and behaviors. Through their growth and development, children will continue to form intuitive theories, revising and altering them as they come across new results and observations. Several developmentalists have conducted research on the progression of these theories, mapping out when children start to form theories about certain subjects, such as the biological and physical world, social behaviors, and others' thoughts and minds ("theory of mind"), although controversies remain over when these shifts in theory-formation occur. As part of their investigative process, children often ask questions, frequently posing "Why?" to adults, not seeking a technical and scientific explanation but instead seeking to investigate the relation of the concept in question to themselves, as part of their egocentric view. In a study in which Mexican-American mothers were interviewed over a two-week period about the types of questions their preschool children ask, researchers discovered that the children asked their parents more about biology and social behaviors than about nonliving objects and artifacts. In their questions, the children were mostly ambiguous, unclear whether they desired an explanation of purpose or cause. 
Although parents will usually answer with a causal explanation, some children find the answers and explanations inadequate for their understanding and, as a result, begin to create their own theories; this is particularly evident in children's understanding of religion. This theory also plays a part in Vygotsky's social learning theory, also called modeling. Vygotsky claims that humans, as social beings, learn and develop by observing others' behavior and imitating them. In this process of social learning, prior to imitation, children will first pose inquiries and investigate why adults act and behave in a particular way. Afterwards, if the adult succeeds at the task, the child will likely copy the adult, but if the adult fails, the child will choose not to follow the example. Comparison with other theories Theory of mind (ToM) Theory-theory is closely related to theory of mind (ToM), which concerns mental states of people, but differs from ToM in that the full scope of theory-theory also concerns mechanical devices or other objects, beyond just thinking about people and their viewpoints. Simulation theory In the scientific debate on mind reading, theory-theory is often contrasted with simulation theory, an alternative theory which suggests simulation or cognitive empathy is integral to our understanding of others. References Cognitive psychology Child development
Asynchronous learning
Asynchronous learning is a general term used to describe forms of education, instruction, and learning that do not occur in the same place or at the same time. It uses resources that facilitate information sharing outside the constraints of time and place among a network of people. In many instances, well-constructed asynchronous learning is based on constructivist theory, a student-centered approach that emphasizes the importance of peer-to-peer interactions. This approach combines self-study with asynchronous interactions to promote learning, and it can be used to facilitate learning in traditional on-campus education, distance education, and continuing education. This combined network of learners and the electronic network in which they communicate are referred to as an asynchronous learning network. Online learning resources that can be used to support asynchronous learning include email, electronic mailing lists, threaded conferencing systems, online discussion boards, wikis, and blogs. Course management systems have been developed to support online interaction, allowing users to organize discussions, post and reply to messages, and upload and access multimedia. These asynchronous forms of communication are sometimes supplemented with synchronous components, including text and voice chat, telephone conversations, videoconferencing, and even meetings in virtual spaces such as Second Life, where discussions can be facilitated among groups of students. History The roots of asynchronous learning are in the end of the 19th century, when formalized correspondence education (or distance learning) first took advantage of the postal system to bring physically remote learners into the educational fold. The 1920s and 1930s saw the introduction of recorded audio, desynchronizing broadcasting and revolutionizing the mass dissemination of information. The first significant distribution of standardized educational content took place during World War II; the branches of the US military produced hundreds of training films, with screenings numbering in the millions. Online asynchronous learning began with schools' and universities' substantial investment in computer technology in the early 1980s. With seminal applications such as Seymour Papert's Logo programming language, students were able to learn at their own pace, free from the synchronous constraints of a classroom lecture. As computers entered more households and schools began connecting to the nascent Internet, asynchronous learning networks began to take shape. These networks augmented existing classroom learning and led to a new correspondence model for solitary learners. Using the web, students could access resources online and communicate asynchronously using email and discussion boards. The 1990s saw the arrival of the first telecampuses, with universities offering courses and entire degree plans through a combination of synchronous and asynchronous online instruction. Today, advanced multimedia and interactivity have enhanced the utility of asynchronous learning networks and blurred the divide between content-creator and content-consumer. New tools like class blogs and wikis are creating ever-richer opportunities for further asynchronous interaction and learning. Development of an asynchronous community Though the social relationships integral to group learning can be developed through asynchronous communication, this development tends to take longer than in traditional, face-to-face settings. 
The establishment of an asynchronous community takes time and effort and tends to follow a projected course of five stages, as described by Waltonen-Moore et al.:
Introductions – This might include a full biography or short "getting-to-know-you" questions. Through this step, community members begin to see one another as human beings and begin to make a preliminary, emotive connection with the other members of the community. This step is often characterized by emotive or extravagant language and represents group members' attempts to make themselves known as living individuals behind the emotionless technology medium.
Identify with the group – Members begin to communicate with one another by reference to their commonalities as group members and seek to either establish or make known norms for successful membership. If this sense of group identity is not established, the likelihood of poor participation or attrition increases.
Interact – Members will start interacting with one another in reference to the community's established focus and begin to share information with one another. If the community is an online learning course, then students will begin to discuss course content.
Group cohesion and individual reflection – Members of the group will begin to validate one another's ideas and opinions while, at the same time, being reflective of their own.
Expansive questioning – Now feeling completely comfortable within the environment, focused upon the content, and respectful of other group members' thoughts and experiences, members will begin to not only post facts and deeply held beliefs, but will actually start to "think out loud", allowing other group members to take part in their personal meaning-making and self-directed inquiry.
Asynchronous communities that progress efficiently through these stages tend to share at least three common attributes: First, the community has an active facilitator who monitors, guides, and nurtures the discourse. Unguided communities tend to have difficulty progressing beyond the second stage of development, because group members can become distracted from the community's intended purpose. Second, rather than seeking to take on the role of an instructor or disseminator of knowledge, the facilitator recognizes that knowledge is an individual construct that is developed through interaction with other group members. Thus, facilitators within successful communities tend not to be pedantic, but supportive. And third, successful asynchronous communities permit a certain amount of leniency for play within their discourse. That is, communities that insist upon being overly stringent on etiquette and make no room for the social development that comes from play seem to drive away participants. Rather than enriching discourse on the targeted topic, such attitudes have a negative impact on group identity development and individual comfort levels which will, in turn, decrease overall involvement. Roles of instructors and learners Online learning requires a shift from a teacher-centered environment to a student-centered environment where the instructor must take on multiple new roles. The constructivist theory that supports asynchronous learning demands that instructors become more than dispensers of knowledge; it requires that they become instructional designers, facilitators, and assessors of both grades and their teaching methods. 
As instructional designers, instructors place emphasis on establishing the curriculum, methods, and media through which the content will be effectively delivered. Once the design is in place and executed, the instructor must then facilitate the communication and direct the learning. Instructors typically have to be proficient with elements of electronic communication, as asynchronous courses are reliant on email and discussion board communications and the instruction methods are reliant on virtual libraries of e-documents, graphics, and audio files. Establishing a communal spirit is vital, requiring a significant time commitment from the instructor, who must spend time reading, assessing, reinforcing, and encouraging the interaction and learning that is happening. The student-centered nature of asynchronous online learning requires students to be actively involved with and take more responsibility for their own learning. In addition to their normal duties as learners, students are required to: become proficient with the technology required for the course; use new methods of communication with both peers and instructors; and strengthen their interdependency through collaboration with their peers. Strengths Asynchronous learning's greatest benefit to students is the freedom it gives them to access the course and its instructional materials at any time they choose, and from any location, with an Internet connection. This allows for accessibility for diverse student populations, ranging from traditional, on-campus students, to working professionals, to international students in foreign countries. Asynchronous learning environments provide a "high degree of interactivity" between participants who are separated both geographically and temporally and afford students many of the social benefits of face-to-face interaction. Since students can express their thoughts without interruption, they have more time to reflect on and respond to class materials and their classmates than in a traditional classroom. Research shows that the time required to initially design an asynchronous course is comparable to that of a traditional synchronous course. However, most asynchronous courses have the potential to reach far more students than a traditional course, and course-wide updates or modifications can be disseminated far more quickly and efficiently than in traditional lecture models. Schifter notes that a perceived additional workload is a significant barrier to faculty participation in distance education and asynchronous learning, but that perception can be mitigated through training and experience with teaching in these environments. Another advantage of asynchronous learning (and, as technology develops, many synchronous learning environments) is that there is a record of nearly everything that occurs in that environment. All materials, correspondence, and interactions can be electronically archived. Participants can go back and review course materials, lectures, and presentations, as well as correspondence between participants. This information is generally available at any time to course participants. Shortcomings Asynchronous learning environments pose several challenges for instructors, institutions, and students. Course development and initial setup can be costly. Institutions must provide a computer network infrastructure, including servers, audio/visual equipment, software, and the technical support needed to develop and maintain asynchronous learning environments. 
Technical support includes initial training and setup, user management, data storage and recovery, as well as hardware repairs and updates. Research indicates that faculty members who are hesitant to teach in asynchronous learning environments are so because of a lack of technical support provided by their institutions. However, for faculty to teach successfully in an asynchronous learning environment, they must be technically adept and comfortable enough with the technological tools to optimize their use. According to a recent case study in India, asynchronous learning during the COVID-19 pandemic was quite stressful for students because it placed more responsibility on them and made them feel frustrated and insecure. To participate in asynchronous learning environments, students must also have access to computers and the Internet. Although personal computers and web access are becoming more and more pervasive every day, this requirement can be a barrier to entry for many students and instructors. Students must also have the computer/technology skills required to participate in the asynchronous learning program. See also Blended learning E-learning Educational technology Networked learning Synchronous learning Augmented learning Asynchronous conferencing References External links The Sloan-C International Conference on Asynchronous Learning ALTMODES-Alternative Modes of Delivery: Asynchronous Learning Educational technology Pedagogy
Social conflict theory
Social conflict theory is a Marxist-based social theory which argues that individuals and groups (social classes) within society interact on the basis of conflict rather than consensus. Through various forms of conflict, groups will tend to attain differing amounts of material and non-material resources (e.g. the wealthy vs. the poor). More powerful groups will tend to use their power in order to retain power and exploit groups with less power. Conflict theorists view conflict as an engine of change, since conflict produces contradictions which are sometimes resolved, creating new conflicts and contradictions in an ongoing dialectic. In the classic example of historical materialism, Karl Marx and Friedrich Engels argued that all of human history is the result of conflict between classes, which evolved over time in accordance with changes in society's means of meeting its material needs, i.e. changes in society's mode of production. Example (sample of the following) Consider the relationship between the owner of a housing complex and a tenant in that same housing complex. A consensus theorist might suggest that the relationship between the owner and the tenant is founded on mutual benefit. In contrast, a conflict theorist might argue the relationship is based on a conflict in which the owner and tenant are struggling against each other. Their relationship is defined by the balance in their abilities to extract resources from each other, e.g. rent payments or a place to live. The bounds of the relationship are set where each is extracting the maximum possible amount of resources out of the other. Conflict can take many forms and involve struggle over many different types of resources, including status. However, formal conflict theory had its foundations in the analysis of class conflict, and the example of the owner and the tenant can be understood in terms of class conflict. In class conflict, owners are likely to have relative advantages over non-owners. For example, the legal system underlying the relationship between the owner and tenant can be biased in favor of the owner. Suppose the owner wishes to keep the tenant's security deposit after that tenant has moved out of the owner's residence. In legal systems based on English common law, the owner is only required to notify the tenant that the security deposit is being withheld. To regain the security deposit, the tenant must file a lawsuit. The tenant bears the burden of proof and is therefore required to prove that the residence was adequately cleaned before move-out. This can be a very difficult or even impossible task. To summarize the example, conflict theorists view the relationship between the owner and tenant as being built primarily on conflict rather than harmony. Even though the owner-tenant relationship may often appear harmonious, any visible harmony is only a product of the law and other elements of the superstructure which constrain the relationship and which are themselves a product of an even deeper conflict, class conflict. A conflict theorist would say that conflict theory holds more explanatory power than consensus theory in this situation since consensus theory cannot explain lawsuits between owners and tenants nor the legal foundations of the asymmetrical power relationship between the two. Social conflict theories From a social-conflict theorist/Marxist point of view social class and inequality emerges because the social structure is based on conflict and contradictions. 
Contradictions in interests and conflict over scarce resources between groups are the foundation of society, according to social conflict theory. The higher class will try to maintain their privileges, power, status and social position—and therefore try to influence politics, education, and other institutions to protect and limit access to their forms of capital and resources. The lower class—in contrast to the higher class—has very different interests. They do not have specific forms of capital that they need to protect. All they are interested in is gaining access to the resources and capital of the higher class. In education, for example, the lower class will do everything to gain access to the higher class's resources by pushing to democratize and liberalize education systems, because these forms of capital are thought to be of value for future success. The various institutions of society, such as the legal and political systems, are instruments of ruling-class domination and serve to further its interests. Marx believed that western society developed through four main epochs—primitive communism, ancient society, feudal society and capitalist society. Primitive communism is represented by the societies of pre-history and provides the only example of the classless society. From then on, all societies are divided into two major classes—masters and slaves in ancient society, lords and serfs in feudal society, and capitalists and wage laborers in capitalist society. Weber sees class in economic terms. He argues that classes develop in market economies in which individuals compete for economic gain. He defines a class as a group of individuals who share a similar position in a market economy and by virtue of that fact receive similar economic rewards. Thus a person's class situation is basically his market situation. Those who share a similar class situation also share similar life chances. Their economic position will directly affect their chances of obtaining the things defined as desirable in their society. Social conflict theory is also used to understand gender inequalities. One theory that is based on social-conflict ideas is radical feminist theory, and feminism in general. According to Jelena Vukoičić, a professor of political science in Belgrade, radical feminism is a feminist theory that starts from the idea of conflict between the sexes as a fundamental conflict, with oppression of women as a direct implication of patriarchy. This theory rests on the assumption that all social activity is the result of certain restrictions and coercion, and although every social system contains specific forms of interactive constraints, they do not have to cause repression. See also Identity politics Iron law of oligarchy Group decision-making Marxist cultural analysis Evidence-based policy Autonomy References Marx, Karl. 1971. Preface to A Contribution to the Critique of Political Economy, Tr. S. W. Ryanzanskaya, edited by M. Dobb. London: Lawrence & Wishart. Skocpol, Theda. 1980. States and Social Revolutions: A Comparative Analysis of France, Russia, and China. New York: Cambridge University Press. Wallerstein, Immanuel M. 1974. The Modern World-System: Capitalist Agriculture and the Origins of the European World-Economy in the Sixteenth Century. New York: Academic Press. Wallerstein, Immanuel M. 1980. The Modern World-System II: Mercantilism and the Consolidation of the European World-Economy, 1600–1750. New York: Academic Press. 
External links Critical Theory Sociological theories Marxist theory Identity politics Social conflict Change Power (social and political) theories
Augmented learning
Augmented learning is an on-demand learning technique where the environment adapts to the learner. By providing remediation on demand, it helps learners gain a greater understanding of a topic while stimulating discovery and learning. Technologies incorporating rich media and interaction have demonstrated the educational potential that scholars, teachers and students are embracing. Instead of focusing on memorization, the learner experiences an adaptive learning experience based upon the current context. The augmented content can be dynamically tailored to the learner's natural environment by displaying text, images, video or even playing audio (music or speech). This additional information is commonly shown in a pop-up window for computer-based environments. Most implementations of augmented learning are forms of e-learning. In desktop computing environments, the learner receives supplemental, contextual information through an on-screen, pop-up window, toolbar or sidebar. As the user navigates a website, e-mail or document, the learner associates the supplemental information with the key text selected by a mouse, touch or other input device. In mobile environments, augmented learning has also been deployed on tablets and smartphones. Augmented learning is often used by corporate learning and development providers to teach innovative thinking and leadership skills by emphasizing "learning-by-doing". Participants are required to apply the skills gained from e-learning platforms to real-life examples. Data is used to create a personalized learning program for each participant, providing supplemental information and remediation. Augmented learning is closely related to augmented intelligence (intelligence amplification) and augmented reality. Augmented intelligence applies information processing capabilities to extend the processing capabilities of the human mind through distributed cognition. Augmented intelligence provides extra support for autonomous intelligence and has a long history of success. Mechanical and electronic devices that function as augmented intelligence range from the abacus and calculator to personal computers and smartphones. Software with augmented intelligence provides supplemental information that is related to the context of the user. When an individual's name appears on the screen, a pop-up window could display that person's organizational affiliation, contact information and most recent interactions. In mobile reality systems, the annotation may appear on the learner's individual "heads-up display" or through headphones for audio instruction. For example, apps for Google Glass can provide video tutorials and interactive click-throughs. Foreign language educators are also beginning to incorporate augmented learning techniques into traditional paper-and-pen-based exercises. For example, augmented information is presented near the primary subject matter, allowing the learner to learn how to write glyphs while understanding the meaning of the underlying characters. See Understanding language, below. Just-in-time understanding and learning Augmentation tools can help learners understand issues, acquire relevant information and solve complex issues by presenting supplementary information at the time of need or "on demand." This contrasts with traditional methods of associative learning, including rote learning, classical conditioning and observational learning, where the learning is performed in advance of the learner's need to recall or apply what has been learned. 
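To make the on-demand idea concrete, the following minimal sketch (a hypothetical illustration, not drawn from any particular product or study mentioned here) shows how a desktop helper might surface a glossary definition only at the moment a learner selects an unfamiliar term, rather than requiring the material to be memorized in advance. The glossary contents and function names are invented for the example.

# Minimal sketch of just-in-time ("on demand") augmentation.
# Glossary contents and function names are hypothetical.

GLOSSARY = {
    "distillation": "Separating a liquid mixture by selective boiling and condensation.",
    "lingua franca": "A language adopted as a common tongue by speakers of different languages.",
}

def lookup(selected_text: str) -> str:
    """Return supplemental information for the text the learner selected."""
    key = selected_text.strip().lower()
    return GLOSSARY.get(key, "No supplemental information available.")

def on_selection(selected_text: str) -> None:
    """Simulate the pop-up window a desktop helper would display."""
    print(f"[pop-up] {selected_text}: {lookup(selected_text)}")

if __name__ == "__main__":
    # The learner highlights a term while reading; help appears only then.
    on_selection("distillation")

The point of the sketch is the timing: supplemental information is fetched when the learner asks for it, which is what distinguishes on-demand augmentation from material studied in advance.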
Snyder and Wilson assert that just-in-time learning is not sufficient; long-term learning demands continuous training that is individualized and built upon individual competencies and strengths. Understanding language Augmented learning tools have been useful for learners seeking an enhanced understanding of words or a foreign language. The interactive, dynamic nature of these on-demand language assistants can provide definitions, sample sentences and even audible pronunciations. When sentences or blocks of text are selected, the words are read aloud while the user follows along with the native text or phonetics. Speech rate control can tailor the text-to-speech (TTS) to keep pace with the learner's comprehension. Augmented learning has already been implemented in schools across the world, and new technology is constantly being developed to hone the skills of students both in and out of the classroom. People other than students can also use these resources to develop their own language skills on their own time and in the language they choose to learn. Websites such as Rosetta Stone have been around for a number of years and allow people of all ages from around the world to learn a new language. Many of these applications and websites are pay-to-use; one that allows for free learning is Duolingo, which offers both free and paid tiers. Augmented learning allows for real-time answers to students' quizzes and tests, providing feedback more quickly than in-class discussion. This type of feedback also allows students to move through the class at their own pace: if an answer is correct, the student may move on to new and more challenging questions, while if the answer is incorrect, the student may be prompted to study more and is given additional practice questions based on the incorrect answer. In an in-person classroom, students have to move at the pace of the rest of the class, which may prevent some from gaining a full understanding of the content and leave many struggling to keep up. Use of these tools has also been associated with more correct answers, up to 95% of the time, in reading exercises. Most forms of augmented learning can be found on the internet through websites and apps on mobile devices. This gives students, and anyone else interested in learning a new language, an ease of access not found in a standard textbook, and it allows for learning wherever the learner goes, without the restrictions a classroom would hold. These applications also allow for more direct one-on-one instruction than would be found in a classroom of twenty-plus students; students are able to submit an answer and get an immediate score back for their work. The downside of augmented learning for language learning is that it may end up putting language teachers out of work. As these programs develop, they may prove to be far more effective at teaching students than a physical teacher ever could be, which would put thousands of language teachers across the globe out of a job and force them into fields they may not enjoy as much. Other problems associated with augmented language learning include the extensive use of technology with no face-to-face learning. Students may suffer fatigue from sitting at a computer for hours a day, which would affect their learning and development in the class. Social isolation from online language learning may also occur, and the lack of regular face-to-face interaction with other students would affect students' mental health. 
While this kind of learning may help some students, it could have severely harmful effects on others. Technical problems are also an everyday concern: issues with the website or application may cause students to miss assignments or an online class and harm their grade because of an issue they have no control over. Issues like this presented themselves fully when the COVID-19 pandemic began, showing that many students would have technology issues that left them stranded in their learning. Augmented learning poses many pros and cons that may entice schools to adopt and use the programs provided to a greater extent. As these programs grow and develop, more and more students will become more efficient in the classroom and in real-world situations where they can use the information gained from augmented learning to their advantage. Understanding Science Augmented reality has come a long way in the science field, but it is still in its infancy. Developers have started using webcams that read a certain marker label and display a virtual object on screen where the label would be. Developers are continuing to gather information on how AR (augmented reality) could take its part in the learning environment. Over the past few years, technologies such as computers, laptops, projectors, whiteboards and much more have been added to the classroom, allowing students to be more engaged in what is happening. Students are also now able to take notes without having to rely only on what the teacher is saying, instead writing down what is shown on the projector. The notes can be more thorough and to the point, rather than an entire explanation. With the help of AR, pictures on the board can also show students the space between certain objects such as planets or atoms. In modern times, augmented learning can help students understand complex topics with more success than the traditional model. One such example of a complex scientific topic benefiting from a virtual environment is continuous distillation. This topic is usually challenging for students to grasp, as a diagram can only show states rather than a continuous change. Practical experiments are also less effective, as the distillation process cannot be seen through the vessel it occurs in. A study on augmented learning with the topic of continuous distillation states, "Overall, the authors garnered that the use of virtual tools helps enhance and enrich the students’ learning by increasing their understanding of key concepts and promotes interest in the subject matter." This conclusion was drawn because students taught with the augmented learning method scored significantly better than students who did not experience it. This is attributed to the application's ability to look inside the vessel the process takes place in throughout each step of the process. When given a complex topic that is hard to visualize, augmented learning can provide a view that aids the student's learning. It was also found to be beneficial when students could engage with this medium in their own time rather than only within the classroom, as the application was always available to them through their phones. As the level of basic technology rises, it becomes more feasible for teachers to shift their focus away from creating the learning environment and toward supplementing the learning tools. 
Outside of the classroom, augmented learning is important for informal STEM (Science, Technology, Engineering and Math) education. This form of education can be seen within museums, science centers, and anywhere outside the classroom. With the widespread use of smart devices, augmented reality is feasible to incorporate into every person's education. Pokémon GO, the 2016 hit mobile game, actively pushed its augmented reality features. This was the first example of widespread use of augmented reality, exposing over 750 million people to the idea of the virtual world interacting with our own. The STEM field began to take notice and implemented this innovative technology in museum exhibits and other informal learning environments. An investigation of seventeen published papers about augmented learning in informal settings found that sixteen of these papers were based on scientific fields. This is understandable, since the most common informal learning environments are museums, which often focus on STEM. Augmented learning exhibits were found to engage a younger demographic more easily than practically demonstrated exhibits. Augmented learning for scientific purposes outside the classroom is widespread and accessible in the modern day. Furthermore, augmented learning research is mostly centered on how it affects the scientific field. Among papers published between 2013 and 2018, the most common topics examined were mobile and e-learning environments, and the most common subject discussed within these papers is science education. This makes sense, as the field of science is directly correlated with augmented learning methods, such as easy access to technology and virtual environments. Science education is an especially easy field in which to implement augmented learning. Many scientific topics are challenging to visualize with still images or in-person demonstrations, but with modern tools the experience is improved. The study on continuous distillation demonstrates the efficacy of a virtual environment that can provide additional context, even over a practical demonstration. Public adoption of more advanced technology, especially when students are the main users of this technology outside the classroom, makes the methods that use it more impactful to their learning experience. Making learning fun One researcher has suggested that handheld devices like cell phones and portable game machines (Game Boy, PlayStation Portable) can make an impact on learning. These mobile devices excel in their portability, context sensitivity, connectivity and ubiquity. By incorporating social dynamics in a real-world context, learning games can create compelling environments for learners. At the Allard Pierson Museum in Amsterdam, visitors view information on demand at the "A Future for the Past" exhibit. In a virtual reconstruction of Satricum and the Forum Romanum, users can call up information that is overlaid on room-sized photos and other images. The museum uses both stationary displays and mobile computers to allow users to view translucent images and information keyed to their specific interest. English Learning Learning English can be difficult at any age, and it can be especially difficult for younger learners due to retention and learning capabilities. But with the use of technology, it can be made easier: one can now listen to pronunciations, look up spelling, and use all sorts of tools to better a student's education. 
In more recent years, learning English has been made much easier through the use of Artificial Intelligence and Virtual Reality technology. These technologies have allowed users to access language learning in ways not previously available, eliminating the need for a human tutor and specialized institutional education. For a majority of these technologies being developed, English has been the focus, as it is a major lingua franca. However, these projects are being expanded to include major languages, minor languages, and even fictional languages such as 'Klingon' from the series Star Trek in the popular app Duolingo. A company known as 'Speaksy Labs' introduced their app 'Speak', which connects the user with a personal AI tutor. With more popularized apps such as Duolingo, the user was only able to practice writing, reading, and basic speech; with Speak, users are able to have full conversations, eliminating the need to find a personal human tutor. Studies done on English learning apps suggest that the design of these Artificial Intelligence apps is boosting users' English skills, making them a valuable resource for those trying to learn the language. The Artificial Intelligence can be trained to give the user specific feedback, including on pronunciation, accent, and tone. Virtual Reality is another effective tool for English learning through its virtual environments and high level of interaction. Studies have shown that activities that require movement help users understand language more, especially with younger audiences. Since these virtual environments are boundless, they can draw on many resources when teaching language lessons. For example, a user could hold an apple in a Virtual Reality setting while the word 'apple' is spoken, so that the user correlates the word with the object. Research conclusions suggest that most children obtain higher scores and satisfaction than those who don't use Virtual Reality technology for their learning. Virtual environments also allow students to see places they may not have visited before, eliminating travel costs. A case study of the 'Elsa Speak' app showed similar results to those described above: in higher learning, the study found that students gained a boost in their English comprehension, allowing them to speak, write, and read the language more efficiently. Now, the Elsa Speak app and various counterparts are breaking into different languages, appealing to groups across the world. Application of technology in college With the expansion of the uses of technology, students are able to learn in all sorts of ways. For the most part, college students nowadays have classes, assignments, homework, and projects online via different websites and turn them in via those websites. Teachers have also found practical uses for new technology within the classroom. Some medical schools, for example, use a giant tablet the size of a table to show bones, blood vessels, veins, and other parts of the body, and other colleges with similar programs have games that they use in specific class subjects. Students who use this technology in the classroom tend to have better study habits as well as better motivation to complete classwork. High school and college education departments have started relying on forms of play to help students better understand class material. 
Quiz games, questionnaires, and team strategy games have been implemented to better help students understand class material. These technologies are also used to track attendance, which can be part of a student's grade, by using the questionnaires to determine who was in class on a particular day. Even in higher education, research has found that learning through play keeps students involved in the material. In larger lecture halls, questionnaires are more prevalent, while team-focused games are used in smaller class settings. In addition, specialized software allows students to present ideas in more effective ways; some users are able to make their own websites, interactive portfolios, or simple PowerPoints. These various methods of learning engage students in the class while preparing them more effectively through exposure to the material in different ways. Is augmentation really "learning"? Critics may see learning augmentation as a crutch that precludes memorization; similar arguments have been made about using calculators in the past. Just as rote learning is not a substitute for understanding, augmented learning is simply another faculty for helping learners recall, present and process information. Current research suggests that even unconscious visual learning can be effective. Visual stimuli, rendered in flashes of information, showed signs of learning even when the human adult subjects were unaware of the stimulus or reward contingencies. One way to look at augmentation is whether the process leads to improvement in terms of signal-to-noise ratio for the individual learner. Diverse predispositions among varied learners mean there can be great disparity in signal processing by different learners for any one particular instruction method. Although people have their doubts about the effectiveness of augmented and virtual reality in classrooms, they are very useful tools for teachers and educators around the world. Studies have shown that AR can improve student understanding of complex or invisible structures. AR and VR are capable of introducing children to new concepts in fun and encouraging ways that they cannot get from a chalkboard or classroom. Not only do these realities help children learn, but they also help researchers identify what specific traits aid or hinder a child's learning capabilities. These mediums for education include the ability to perform experiments virtually rather than in person when the physical versions pose risk or health concerns. One study focused on the understanding of electromagnetism; in comparison to a non-AR environment, AR proved to increase the students’ ability to visualize structural phenomena, reduced cognitive load, and improved motivation and self-confidence. However, this seems to be the case only for physical experiences, and there has not been anything to prove AR's dominance in a person's understanding of theoretical concepts. It is for this reason that AR will not take over classrooms entirely but will be a great advancement in children's understanding of STEM concepts. It is important to remember that this is still an under-researched and relatively new technology for schools to implement. The technology used for creating AR experiences dates back to 1990; however, it was not widely used for educational purposes until around 2010. There is a lot that can be done to improve how AR is used in educational settings. It is also worth noting what types of studies were conducted. 
A large portion of the research documented consists of narrative-based and qualitative literature reviews; however, some suggest that it would be more accurate and credible to perform meta-analyses. A meta-analysis is, in short, a way of gathering many individual studies and analyzing them together to draw broader conclusions from the combined information. This provides a more valid representation of the data, due to the sheer number of studies incorporated and the variety of their research methods, rather than relying on the findings of just one case. AR opens a whole world of exploration for people not only to learn, but to want to continue learning. Through the use of 3D VLEs, AR allows people to travel in seconds to any place they like that would otherwise be expensive or impossible to get to. This means that a person can be anywhere in the world and immerse themselves in the culture of the place they travel to. AR makes travelling and studying phenomena more interactive and entices us to keep using it because of its easy access and fun nature. While AR is helpful in classrooms, it is found that informal learning settings, outside of the classroom, have a higher impact on a student's learning. There has not yet been a study with a large enough sample size to compare how AR affects learning in these informal settings, but as stated in the second paragraph of this sub-section, even unconsciously, people are able to learn through AR with no stimulus or reward. The group that AR benefits most is students at the Bachelor's or equivalent level. It is believed that AR is less effective for younger children because of the complexity of operation and the overload of information. The study also backs up the previous claim that AR has a greater effect on individuals in engineering, manufacturing, and construction fields, and finds that social science understanding is not aided. Augmented Learning in Education Augmented learning has allowed not only students to learn, but also their parents. Tools like mobile games have made it easier for parents to understand more fully what their child is learning in school. Technology brings the child's content to a new platform, which is helpful to parents when trying to make meaningful connections to what their child is learning in school. Furthermore, augmented reality has brought young children a new way of learning to articulate words: marker labels in books are read by a tablet, making pictures appear on the screen along with audio narration for enhanced reading. Augmented Reality in education has the potential to change the timing and location of the conventional learning process. This style of learning introduces new methods of studying. With the boom of technology and younger students being the biggest users, the learning platform has the ability to connect this generation and their smartphones to gain knowledge. Though it has yet to be fully explored, Augmented Reality in education is looking to become a large market. This style of learning can gain attention and expand students' interest in subjects and topics they would not learn or come across in the conventional classroom lecture. Extra data such as fun facts, visual models, or historical data from events could give a wider understanding of the topics being taught. 
The learning platform hopes to explain abstract concepts, engage and interact with the learner, and help them discover and learn additional information about what they want to learn. See also Electronic learning Evidence-based learning Intelligence amplification References Sources Karacapilidis, Nikos (2009). Solutions and Innovations in Web-Based Technologies for Augmented Learning: Improved Platforms, Tools, and Applications. PA: IGI Global. Milne, Andre J. (1999). Shaping the Future of Technology-Augmented Learning Environments: Report on a Planning Charrette at Stanford University. https://web.archive.org/web/20100708025551/http://www-cdr.stanford.edu/~amilne/Publish/SCUP-34_Abstract.PDF External links Loqu8 iCE Augmented learning software for understanding Chinese. Point or highlight Chinese text in webpages and documents. Displays definitions (in English, German and French), Pinyin and Bopomofo. Reads words aloud in Chinese (Mandarin, Cantonese). Augmented reality Augmented Reality Technology Brings Learning to Life Learning Cognitive science Intelligence
Distributed leadership
Distributed leadership is a conceptual and analytical approach to understanding how the work of leadership takes place among people and in the context of a complex organization. Though developed and primarily used in education research, it has since been applied to other domains, including business and even tourism. Rather than focus on characteristics of the individual leader or features of the situation, distributed leadership foregrounds how actors engage in tasks that are "stretched" or distributed across the organization. With theoretical foundations in activity theory and distributed cognition, understanding leadership from a distributed perspective means seeing leadership activities as a situated and social process at the intersection of leaders, followers, and the situation. Background and origins Distributed leadership emerged in the early 2000s from sociological, cognitive, psychological, and anthropological theories, most importantly distributed cognition and activity theory, though also influenced by Wenger's communities of practice. It was conceived as a theoretical and analytical framework for studying school leadership, one that would explicitly focus attention on how leadership was enacted in schools, as an activity stretched across the "social and situational contexts." Leadership research up through the late 1990s focused on the specific traits, functions, or effects of individual leaders. Much of the work done in educational research focused exclusively on the principal and centered around defining the heroics of individuals. Descriptions were written of what was being done but not how, which limited transferability across contexts. From this research it was unclear how leaders responded to the complex environment in schools. Though some research on leadership has continued to focus on the role or function of the designated leader, such as instructional leadership or transformational leadership, there has also been a significant shift toward understanding leadership as a shared effort by more than one person. The latter constructs look more broadly at the various roles that provide forms of leadership throughout the school, including teacher leadership, democratic leadership, shared leadership, or collaborative leadership. Distributed leadership draws on these multi-agent perspectives to describe how actors work to establish the conditions for improving teaching and learning in schools. In this sense, distributed leadership is not a single activity but rather a procedure. Key concepts Leadership is defined as any "activities tied to the core work of the organization that are designed by organizational members to influence the motivation, knowledge, affect, or practices of other organizational members." Thus a leader is anyone who engages in these activities, a matter of tasks rather than position. As this definition implies, there is within an organization a group of people who are influenced by these leadership activities: these are the followers. Importantly, the role of a leader or follower is dynamic, and a person might be a follower in one situation but not in another. Additionally, followers are not passive recipients of these influences, and they may influence the leaders as well. Leader Plus The Leader Plus aspect posits that leadership activity as a whole is stretched, or distributed, across many people. Leadership is often enacted with those not in official leadership positions; thus distributed leadership examines enactments of leadership activity rather than roles. 
The configurations of leadership activity might include collaborated, collective, or coordinated distribution. Collaborated distribution is where two or more leaders co-perform the leadership activity in the same place and time. In collective distribution, the performance of leadership actions is separate but the actions are interdependent. Coordinated distribution exists where the leadership activities are performed in a particular sequence. Leadership activities are dynamic and situated, thus these three categories do not correspond with particular types of activities or duties. This part of the framework foregrounds leadership activities and all individuals who contribute, avoiding the tendency to focus solely on designated leaders. Practice Practice is the product of interactions amongst leaders, followers, and the situation over time. This is a key link to distributed cognition, where thinking and understanding are constituted by interactions with other people, tools, and routines, rather than occurring independently within an individual. Research from a distributed perspective often takes a task-oriented approach as a way to break down practice into manageable units of analysis. Understanding how tasks are carried out, and which are deemed important by leaders and followers, gives a window into practice. Situation The situation comprises a complex web of material and social aspects of the environment, such as history, culture, physical environmental features, and the policy environment, as well as more local aspects such as task complexity, organizational structure, or staff stability. The key here is to identify and focus on the "aspects of the situation that enable and constrain leadership practice but also captur[e] how they shape that practice." Whereas Contingency Theory describes the situation as merely the context within which individuals act, a distributed perspective looks to the situation as constitutive, in the sense that it both influences and is influenced by the actions of the people in it. Two aspects of the situation that are often foregrounded in a distributed perspective are tools and routines. Tools are objects designed with a purpose toward enabling some action. Perhaps the most obvious example of a tool is a hammer. In organizations, however, tools might be a rubric for assessing teaching or an attendance checklist. They are not just accessories or incidentals; they both enable and constrain practice. Tools help focus the user's attention but can also obscure other elements. The attendance taker might check off all those students present and think the task is complete, but fail to notice a student who is present but not on the list. In this way, the tool is constitutive of the task, not just an accessory. A routine is a regular sequence or pattern of actions that happens in an organization. This may or may not align with the tools. For example, with a rubric for assessing teaching, the associated routine might be when and how an instructional leader observes a class, such as instructional rounds. Tools, routines, and other aspects of the situation might be locally designed, received, or inherited. Importantly, tools and routines take on a portion of the cognitive load required to complete a task (see Distributed Cognition). In the example of the rubric for assessing teaching, the principal doing the observations will be prompted about what to pay attention to, and the routine will bring consistency to the observation practice. 
Thus the enactment of leadership in this situation is distributed across the principal, the teacher being observed, and the routine. Foundational theories Distributed cognition sits at the intersection of psychology, sociology, and cognitive science. It is essentially the theory that knowledge, and the thinking done with that knowledge, are stretched across the tools, situation, other people, and context. It originated with the work of anthropologist Edwin Hutchins in the 1990s with his studies of navigation on a naval aircraft carrier. His work on understanding naturally situated cognition led to the conclusion that cognition is socially distributed. Rather than looking for knowledge structures within an individual, his work showed that cognitive activity, or knowing what to do, was a situated process, influenced by other people, tools, and the situation. Leadership is often studied as something that is done or acted out by an individual. Social or shared leadership approaches often still see leadership as actions done by individuals, just done in cooperation with others. Taking a distributed perspective, in contrast, draws on the theory of distributed cognition to understand leadership as an emergent property of the system. In this way, it sits in between those who see leadership as a result of individual agency and those who see it as an outcome of the situation. Activity Theory is a broad social sciences approach to understanding human behavior as contextualized in a situation. This situated perspective expands the unit of analysis to the collective rather than the individual and studies the relations between actions. Although this approach is aimed at understanding the individual, the unit of analysis is the broader system in which that individual participates. Engestrom identifies three generations of activity theory and the researchers associated with each: the first generation, a model focused on the individual (subject-object-mediating artifact), by Lev Vygotsky (1978); the second generation, an expansion of the model to include collective action, by Alexei Leont'ev (1981); and the third generation, a move toward a networked understanding of interactive activity systems, proposed by Engestrom himself (1987). Another Activity Theory scholar, Barbara Rogoff, expands this work in two ways: first, the foregrounding of the individual must be done without losing sight of the interdependence of the system; and second, three different levels of resolution (interpersonal, cultural/community, and institutional/cultural planes) are needed to understand the different levels of activity. A distributed perspective on leadership takes this networked and multi-level approach to give "context of action" and to "maintain... the tension between agency and distribution." Additionally, Spillane and Gronn both draw on an application of activity theory in the field of leadership research that grew out of Mintzberg's studies of work-activity, observing managers through structured observations to document what they actually do. While innovative and exciting at the time, this documentation was ultimately deemed shallow: it did not differentiate between managerial and non-managerial work, it left unanswered questions about how management was enacted, and it did not explain leadership effectiveness. Understanding leadership from a distributed perspective means looking for leadership activity as a situated and social process, drawing on both distributed cognition and activity theory. 
Usage of the term "Distributed leadership" entered the leadership and organizational theory discourse and clearly appealed to various scholars, policy makers, administrators, and practitioners as they have used it to frame, describe, and promote their work. Some use it as a recipe for effective leadership or improving schools; others use it to prescribe optimal leadership or organizational structure. The most common alternative usage is equating distributed leadership with more than one designated leader, ideas such as shared, democratic, or collaborative leadership. Studies along these lines often look at the distribution of leadership roles. Interest in these alternative organizational structures reflect the increased demands on leaders in schools and changes in the demands on educational organizations, and the term "distributed leadership" gets used to represent this. Some worry that this overlap in usage results in a watering down of ideas or rebranding of old ideas in new terms. A distinction that helps unravel the mixed usage is to distinguish between distributed leadership as a conceptual or analytical framework versus distributed leadership as a normative or practical framework. Taking an analytical perspective is to understand leadership activities as a product of the interactions amongst leaders, followers, and the situation. This reflects the roots of the framework in distributed cognition and activity theory. A practical or normative approach is concerned with optimizing the distribution of leadership so as to improve organizations. In this case, research is focused on the effects of certain configurations of leadership roles or activities. While the use of Distributed Leadership as a term will continue to evolve as scholarship on the topic continues to develop, this distinction is important in maintaining common epistemologies for researchers, policy makers, administrators, and practitioners. Notes General references Harris, A. (2008). Distributed School Leadership: Developing tomorrow's leaders. New York: Routledge. Hutchins, E. (1995). Cognition in the Wild. Boston: MIT Press. Spillane, J. (2006). Distributed leadership. San Francisco: Jossey-Bass. Spillane, J. & Diamond, J. (2007). Distributed Leadership in Practice. New York: Teachers College Press. Spillane, J. P., Halverson, R., & Diamond, J. B. (2001). Investigating School Leadership Practice: A Distributed Perspective. Educational Researcher, (April), 23–28. External links Distributed Leadership Study at Northwestern University, led by James Spillane Distributed Leadership Project by the Australian Learning and Teaching Council (ALTC) and the Australian Government Office for Learning and Teaching (OLT) CALL: Comprehensive Assessment for Leadership and Learning, at the University of Wisconsin – Madison, led by Richard Halverson and Carolyn Kelley Educational administration Leadership Organizational theory
Need theory
Need theory, also known as Three needs theory, proposed by psychologist David McClelland, is a motivational model that attempts to explain how the needs for achievement, affiliation, and power affect the actions of people from a managerial context. This model was developed in the 1960s, two decades after Maslow's hierarchy of needs was first proposed in the early 1940s. McClelland stated that every person has these three types of motivation regardless of age, sex, race, or culture. The type of motivation by which each individual is driven derives from their life experiences and the opinions of their culture. This need theory is often taught in classes concerning management or organizational behaviour. Need for achievement People who have a need for achievement prefer to work on tasks of moderate difficulty in which results are based on their efforts rather than on anything else to receive feedback on their work. Achievement based individuals tend to avoid both high-risk and low-risk situations. Low-risk situations are seen as too easy to be valid and the high-risk situations are seen as based more on the luck of the situation rather than the achievements that individual made. This personality type is motivated by accomplishment in the workplace and an employment hierarchy with promotional positions. Need for affiliation People who have a need for affiliation prefer to spend time creating and maintaining social relationships, enjoy being a part of groups, and have a desire to feel loved and accepted. People in this group tend to adhere to the norms of the culture in that workplace and typically do not change the norms of the workplace for fear of rejection. This person favors collaboration over competition and does not like situations with high risk or high uncertainty. People who have a need for affiliation work well in areas based on social interactions like customer service or client interaction positions. Need for power People who have a need for power prefer to work and place a high value on discipline. The downside to this motivational type is that group goals can become zero-sum in nature, that is, for one person to win, another must lose. However, this can be positively applied to help accomplish group goals and to help others in the group feel competent about their work. A person motivated by this need enjoys status recognition, winning arguments, competition, and influencing others. With this motivational type comes a need for personal prestige, and a constant need for a better personal status. Effect McClelland's research showed that 86% of the population are dominant in one, two, or all three of these three types of motivation. His subsequent research, published in the 1977 Harvard Business Review article "Power is the Great Motivator", found that those in top management positions had a high need for power and a low need for affiliation. His research also found that people with a high need for achievement will do best when given projects where they can succeed through their own efforts. Although individuals with a strong need for achievement can be successful lower-level managers, they are usually weeded out before reaching top management positions. He also found that people with a high need for affiliation may not be good top managers but are generally happier, and can be highly successful in non-leadership roles such as the foreign service. References Motivational theories
Public
In public relations and communication science, publics are groups of individual people, and the public (a.k.a. the general public) is the totality of such groupings. This is a different concept from the sociological concept of the Öffentlichkeit or public sphere. The concept of a public has also been defined in political science, psychology, marketing, and advertising. In public relations and communication science, it is one of the more ambiguous concepts in the field. Definitions have been formulated in the theory of the field from the early 20th century onwards, but the concept has in more recent years become blurred as a result of the conflation of the idea of a public with the notions of audience, market segment, community, constituency, and stakeholder. Etymology and definitions The name "public" originates with the Latin publicus (also poplicus), from populus, and is related to the English word 'populace'; in general it denotes some mass population ("the people") in association with some matter of common interest. So in political science and history, a public is a population of individuals in association with civic affairs, or affairs of office or state. In social psychology, marketing, and public relations, a public has a more situational definition. John Dewey defined a public as a group of people who, in facing a similar problem, recognize it and organize themselves to address it. Dewey's definition of a public is thus situational: people organized about a situation. Built upon this situational definition of a public is the situational theory of publics by James E. Grunig, which talks of nonpublics (who have no problem), latent publics (who have a problem), aware publics (who recognize that they have a problem), and active publics (who do something about their problem). In public relations and communication theory, a public is distinct from a stakeholder or a market. A public is a subset of the set of stakeholders for an organization, comprising those people concerned with a specific issue. Whilst a market has an exchange relationship with an organization, and is usually a passive entity created by the organization, a public does not necessarily have an exchange relationship, and is both self-creating and self-organizing. Publics are targeted by public relations efforts. In this, target publics are those publics whose involvement is necessary for achieving organization goals; intervening publics are opinion formers and mediators, who pass information to the target publics; and influentials are publics that the target publics turn to for consultation, whose value judgements are influential upon how a target public will judge any public relations material. The public is often targeted especially in regard to political agendas, as its vote is necessary to further the progression of a cause. As seen in Massachusetts between 2003 and 2004, it was necessary to "win a critical mass of states and a critical mass of public support" in order to get same-sex marriage passed in the commonwealth. Public relations theory perspectives on publics are situational, per Dewey and Grunig; mass, where a public is simply viewed as a population of individuals; agenda-building, where a public is viewed as a condition of political involvement that is not transitory; and "homo narrans", where a public is (in the words of Gabriel M. 
Vasquez, assistant professor in the School of Communication at the University of Houston) a collection of "individuals that develop a group consciousness around a problematic situation and act to solve the problematic situations". Public schools are often the subject of controversy over their "agenda-building", especially in debates over whether to teach a religious or secular curriculum. The promotion of an agenda is commonplace whenever one is in a public environment, but schools have exceptional power in that regard. One non-situational concept of a public is that of Kirk Hallahan, professor at Colorado State University, who defines a public as "a group of people who relate to an organization, who demonstrate varying degrees of activity—passivity, and who might (or might not) interact with others concerning their relationship with the organization". Samuel Mateus's 2011 paper "Public as Social Experience" considered the concept from an alternative point of view: the public "is neither a simple audience constituted by media consumers nor just a rational-critical agency of a Public Sphere". He argued that "the concept should also be seen in the light of a publicness principle, beyond a critic and manipulative publicity (...). In accordance, the public may be regarded as the result of the social activities made by individuals sharing symbolic representations and common emotions in publicness. Seen with lower-case, the concept is a set of subjectivities who look publicly for a feeling of belonging. So, in this perspective, the public is still a fundamental notion to social life although in a different manner in comparison to 18th century Public Sphere's Public. He means above all the social textures and configurations where successive layers of social experience are built up." Social publics Social publics are groups of people united by common ideas, ideology, or hobbies. Networked publics are social publics which have been socially restructured by the networking of technologies. As such, they are simultaneously both (1) the space constructed through networked technologies and (2) the imagined collective which consequently emerges as a result of the intersection of human persons, shared technologies, and their practices. See also Community Nation People Public sphere Res publica Volk References Bibliography Further reading Hannay, Alastair (2005). On the Public. Routledge. Kierkegaard, Søren (2002). A Literary Review; Alastair Hannay (trans.). London: Penguin. Lippmann, Walter. The Phantom Public (Library of Conservative Thought), Transaction Publishers; Reprint edition, January 1, 1993. Mayhew, Leon H. The New Public: Professional Communication and the Means of Social Influence (Cambridge Cultural Social Studies), Cambridge University Press, September 28, 1997. Sennett, Richard. The Fall of Public Man. W. W. Norton & Company; Reissue edition, June 1992. Human communication Public relations Sociological terminology Political science Marketing
Multimethodology
Multimethodology or multimethod research includes the use of more than one method of data collection or research in a research study or set of related studies. Mixed methods research is more specific in that it includes the mixing of qualitative and quantitative data, methods, methodologies, and/or paradigms in a research study or set of related studies. One could argue that mixed methods research is a special case of multimethod research. Another applicable, but less often used label, for multi or mixed research is methodological pluralism. All of these approaches to professional and academic research emphasize that monomethod research can be improved through the use of multiple data sources, methods, research methodologies, perspectives, standpoints, and paradigms. The term multimethodology was used starting in the 1980s and in the 1989 book Multimethod Research: A Synthesis of Styles by John Brewer and Albert Hunter. During the 1990s and currently, the term mixed methods research has become more popular for this research movement in the behavioral, social, business, and health sciences. This pluralistic research approach has been gaining in popularity since the 1980s. Multi and mixed methods research designs There are four broad classes of research studies that are currently being labeled "mixed methods research": Quantitatively driven approaches/designs in which the research study is, at its core, a quantitative study with qualitative data/method added to supplement and improve the quantitative study by providing an added value and deeper, wider, and fuller or more complex answers to research questions; quantitative quality criteria are emphasized but high quality qualitative data also must be collected and analyzed; Qualitatively driven approaches/designs in which the research study is, at its core, a qualitative study with quantitative data/method added to supplement and improve the qualitative study by providing an added value and deeper, wider, and fuller or more complex answers to research questions; qualitative quality criteria are emphasized but high quality quantitative data also must be collected and analyzed; Interactive or equal status designs in which the research study equally emphasizes (interactively and through integration) quantitative and qualitative data, methods, methodologies, and paradigms. This third design is often done through the use of a team composed of an expert in quantitative research, an expert in qualitative research, and an expert in mixed methods research to help with dialogue and continual integration. In this type of mixed study, quantitative and qualitative and mixed methods quality criteria are emphasized. This use of multiple quality criteria is seen in the concept of multiple validities legitimation. Here is a definition of this important type of validity or legitimation: Multiple validities legitimation "refers to the extent to which the mixed methods researcher successfully addresses and resolves all relevant validity types, including the quantitative and qualitative validity types discussed earlier in this chapter as well as the mixed validity dimensions. In other words, the researcher must identify and address all of the relevant validity issues facing a particular research study. Successfully addressing the pertinent validity issues will help researchers produce the kinds of inferences and meta-inferences that should be made in mixed research"(Johnson & Johnson, 2014; page 311). 
Mixed priority designs in which the principal study results derive from the integration of qualitative and quantitative data during analysis. Desirability The case for multimethodology or mixed methods research as a strategy for intervention and/or research is based on four observations: Narrow views of the world are often misleading, so approaching a subject from different perspectives or paradigms may help to gain a holistic or more truthful worldview. There are different levels of social research (i.e.: biological, cognitive, social, etc.), and different methodologies may have particular strengths with respect to one of these levels. Using more than one should help to get a clearer picture of the social world and make for more adequate explanations. Many existing practices already combine methodologies to solve particular problems, yet they have not been theorised sufficiently. Multimethodology fits well with pragmatism. Feasibility There are also some hazards to multimethodological or mixed methods research approaches. Some of these problems include: Many paradigms are at odds with each other. However, once the understanding of the difference is present, it can be an advantage to see many sides, and possible solutions may present themselves. Multimethod and mixed method research can be undertaken from many paradigmatic perspectives, including pragmatism, dialectical pluralism, critical realism, and constructivism. Cultural issues affect world views and analyzability. Knowledge of a new paradigm is not enough to overcome potential biases; it must be learned through practice and experience. People have cognitive abilities that predispose them to particular paradigms. Quantitative research requires skills of data analysis and several techniques of statistical reasoning, while qualitative research is rooted in in-depth observation, comparative thinking, interpretative skills and interpersonal ability. Neither approach is easier to master than the other, and both require specific expertise, ability and skills. Pragmatism and mixed methods Pragmatism allows for the integration of qualitative and quantitative methods as loosely coupled systems to support mixed methods research. On the one hand, quantitative research is characterized by randomized controlled trials, research questions inspired by gaps in the literature, generalizability, validity, and reliability. On the other, qualitative research is characterized by socially constructed realities and lived experiences. Pragmatism reconciles these differences and integrates quantitative and qualitative research as loosely coupled systems, where "open systems interact with each other at the point of their boundaries." History of Pragmatism in Multi/Mixed Methods Research Developed as a philosophical method to solve problems towards the end of the nineteenth century, pragmatism is attributed to the work of philosopher Charles Sanders Peirce. For Peirce, research is conducted and interpreted from the eye of the beholder, as a practical approach to investigating social affairs. He sees science as a communal affair leading to single truths that are arrived at from multiple perspectives. For Peirce, the research conclusions are not as important as how these conclusions are reached. The focus is on answering the research question while allowing the methods to emerge in the process. Peirce's pragmatism and its approach to research support qualitatively driven mixed methods studies. 
John Dewey extends both "Peirce pragmatic method and (William) James' radical empiricism (and approach to experience) by application to social and political problems." His philosophical pragmatism takes an interdisciplinary approach, where the divide between quantitative and qualitative research represents an obstacle to solving a problem. In Dewey's pragmatism, success is measured by the outcome, where the outcome is the reason to engage in research. Lived experiences constitute reality, where individual lived experiences form a continuum through the interaction of subjective (internal) and objective (external) conditions. In Dewey's continuum of experiences, no experience stands on its own; it is influenced by the experiences that preceded it, and influences those that will follow it. His approach to knowledge is open-minded, and inquiry is central to his epistemology. Following Dewey, quantitatively driven research methods dominated until 1979, when Richard Rorty revived pragmatism. Rorty introduces his own ideas into pragmatism, which include the importance of culture, beliefs, and context. He shifts from understanding how things are to how they could be, and introduces the idea that "justification is audience dependent, and pretty much any justification finds a receptive audience". As Rorty explains, research success is peer dependent, not peer-group neutral. From his perspective, MMR is not simply the merging of quantitative and qualitative research, but a third camp with its own peers and supporters. Pragmatic philosophical positions Multiple pragmatic philosophical stances may be used to justify pragmatism as a paradigm when conducting mixed methods research (MMR). A research paradigm provides a framework based on what constitutes knowledge and how knowledge is formed. Pragmatism as a philosophy may aid researchers in positioning themselves somewhere in the spectrum between qualitatively driven and quantitatively driven methods. The following philosophical stances can help address the debate between the use of qualitative and quantitative methods, and help to ground quantitatively, qualitatively, or equal-status driven MMR. Radical empiricism Radical empiricism, as articulated by William James, takes reality as a function of our ongoing experiences, constantly changing at the individual level. James emphasizes that reality is not predetermined, and that individual free will and chance matter. These ideas fit well with qualitative research emphasizing lived experiences. James also finds truth in empirical and objective facts, merging the divide between qualitative and quantitative research. However, James points out that no truth is independent of the thinker. James' brand of pragmatism may be used by researchers conducting qualitatively driven and equal-status driven MMR. Dialectical Pluralism Dialectical pluralism is a form of pragmatism that emphasizes intentionally drawing from multiple approaches to conducting research and developing knowledge. The multiple approaches being taken need not agree or converge with one another. Instead, the researcher using dialectical pluralism in the conduct of a mixed-method study may tack back and forth between models and perspectives in order to develop insight. Realism and Critical Realism Realists and critical realists take the perspective that the world exists independently of our observation and interpretation of it; critical realism goes beyond this to assert that multiple interpretations of the world are likely. 
Like dialectical pluralism, realist paradigms in the context of pragmatic multi/mixed-methods research emphasize the idea that multiple approaches to knowledge are expected and can be treated as complementary. In contrast to a more strict positivist approach, critical realism sees causality as embedded in the details of a situation and social processes that surround an event. Transformative-Emancipatory Transformative and emancipatory paradigms emphasize a commitment on the part of the researcher to social justice, as in critical race theory. Researchers conducting multi-method or mixed-methods research within this paradigm tend to orient to issues of "power, privilege, and inequity." In contrast to quantitative and qualitative methodologies One major similarity between mixed methodologies and qualitative and quantitative taken separately is that researchers need to maintain focus on the original purpose behind their methodological choices. A major difference between the two, however, is the way some authors differentiate the two, proposing that there is logic inherent in one that is different from the other. Creswell (2009) points out that in a quantitative study the researcher starts with a problem statement, moving on to the hypothesis and null hypothesis, through the instrumentation into a discussion of data collection, population, and data analysis. Creswell proposes that for a qualitative study the flow of logic begins with the purpose for the study, moves through the research questions discussed as data collected from a smaller group and then voices how they will be analysed. A research strategy is a procedure for achieving a particular intermediary research objective — such as sampling, data collection, or data analysis. We may therefore speak of sampling strategies or data analysis strategies. The use of multiple strategies to enhance construct validity (a form of methodological triangulation) is now routinely advocated by methodologists. In short, mixing or integrating research strategies (qualitative and/or quantitative) in any and all research undertaking is now considered a common feature of good research. A research approach refers to an integrated set of research principles and general procedural guidelines. Approaches are broad, holistic (but general) methodological guides or roadmaps that are associated with particular research motives or analytic interests. Two examples of analytic interests are population frequency distributions and prediction. Examples of research approaches include experiments, surveys, correlational studies, ethnographic research, and phenomenological inquiry. Each approach is ideally suited to addressing a particular analytic interest. For instance, experiments are ideally suited to addressing nomothetic explanations or probable cause; surveys — population frequency descriptions, correlations studies — predictions; ethnography — descriptions and interpretations of cultural processes; and phenomenology — descriptions of the essence of phenomena or lived experiences. In a single approach design (SAD)(also called a "monomethod design") only one analytic interest is pursued. In a mixed or multiple approach design (MAD) two or more analytic interests are pursued. 
Note: a multiple approach design may include entirely "quantitative" approaches such as combining a survey and an experiment; or entirely "qualitative" approaches such as combining an ethnographic and a phenomenological inquiry, and a mixed approach design includes a mixture of the above (e.g., a mixture of quantitative and qualitative data, methods, methodologies, and/or paradigms). A word of caution about the term "multimethodology". It has become quite common place to use the terms "method" and "methodology" as synonyms (as is the case with the above entry). However, there are convincing philosophical reasons for distinguishing the two. "Method" connotes a way of doing something — a procedure (such as a method of data collection). "Methodology" connotes a discourse about methods — i.e., a discourse about the adequacy and appropriateness of particular combination of research principles and procedures. The terms methodology and biology share a common suffix "logy." Just as bio-logy is a discourse about life — all kinds of life; so too, methodo-logy is a discourse about methods — all kinds of methods. It seems unproductive, therefore, to speak of multi-biologies or of multi-methodologies. It is very productive, however, to speak of multiple biological perspectives or of multiple methodological perspectives. See also Perestroika Movement (political science) Post-autistic economics Computer-assisted qualitative data analysis software References Further reading Andres, Lesley (2012). Designing and Doing Survey Research. London: Sage. Survey research from a mixed methods perspective. Brannen, Julia. 2005. "Mixing Methods: The Entry of Qualitative and Quantitative Approaches into the Research Process." International Journal of Social Research Methodology 8:173–184. Brewer, J., & Hunter, A. (2006). Foundations of Multimethod Research: Synthesizing Styles. Thousand Oaks, CA: Sage. Creamer, E. G. (2017). An Introduction To Fully Integrated Mixed Methods Research. Thousand Oaks, CA:Sage. Creswell, J. W., & Plano Clark, V. L. (2011). Designing and Conducting Mixed Methods Research. Los Angeles, CA: Sage. Curry, L. & Nunez-Smith M. (2014). Mixed Methods in Health Sciences Research: A Practical Primer. Thousand Oaks, CA: Sage Publications. Greene, J. C. (2007). Mixed Methods in Social Inquiry. San Francisco, CA: Jossey-Bass. Guest, G. (2013). Describing mixed methods research: An alternative to typologies. Journal of Mixed Methods Research, 7, 141–151. Hesse-Biber, S. (2010b). Emerging methodologies and methods practices in the field of mixed method research. Qualitative Inquiry, 16(6), 415–418. Hesse-Biber, Sharlene and R. Burke Johnson (2015). The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry. Oxford University Press. Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a Definition Mixed Methods Research. Journal of Mixed Methods Research, 1, 112–133. Lowenthal, P. R., & Leech, N. (2009). Mixed research and online learning: Strategies for improvement. In T. T. Kidd (Ed.), Online Education and Adult Learning: New Frontiers for Teaching Practices (pp. 202–211). Hershey, PA: IGI Global. Mingers J., Brocklesby J., "Multimethodology: Towards a Framework for Mixing Methodologies", Omega, Volume 25, Number 5, October 1997, pp. 489–509 (21) Morgan, D. L. (2014). Integrating Qualitative & Quantitative Methods: A Pragmatic Approach. Los Angeles, CA: Sage. Morse, J. M., & Niehaus, L. (2009). Mixed Methods Design: Principles and Procedures. Left Coast Press. Pepe, A. 
& Castelli, S. (2013) A cautionary tale on research methods in the field of parents in education. International Journal about Parents in Education, 7(1), pp 1–6. Onwuegbuzie, A. J., & Johnson, R. B. (2006). The "Validity" Issue in Mixed Methods Research. Research in the Schools, 13(1), 48–63. Onwuegbuzie, Anthony and Leech, Nancy (2005). "Taking the "Q" Out of Research: Teaching Research Methodology Courses Without the Divide Between Quantitative and Qualitative Paradigms." Quality and Quantity 39:267–296. Schram, Sanford F., and Brian Caterino, eds. (2006). Making Political Science Matter: Debating Knowledge, Research, and Method. New York: New York University Press. Teddlie, C., & Tashakkori, A. (2009). Foundations of Mixed Methods Research: Integrating Quantitative and Qualitative Approaches in the Social and Behavioral Sciences. Thousand Oaks, CA: Sage. External links Mixed Methods Network for Behavioral, Social, and Health Sciences of Mixed Methods International Research Association Pluralism (philosophy) Research methods Pragmatism
Sociotechnical system
Sociotechnical systems (STS) in organizational development is an approach to complex organizational work design that recognizes the interaction between people and technology in workplaces. The term also refers to coherent systems of human relations, technical objects, and cybernetic processes that inhere to large, complex infrastructures. Social society, and its constituent substructures, qualify as complex sociotechnical systems. The term sociotechnical systems was coined by Eric Trist, Ken Bamforth and Fred Emery, in the World War II era, based on their work with workers in English coal mines at the Tavistock Institute in London. Sociotechnical systems pertains to theory regarding the social aspects of people and society and technical aspects of organizational structure and processes. Here, technical does not necessarily imply material technology. The focus is on procedures and related knowledge, i.e. it refers to the ancient Greek term techne. "Technical" is a term used to refer to structure and a broader sense of technicalities. Sociotechnical refers to the interrelatedness of social and technical aspects of an organization or the society as a whole. Sociotechnical theory is about joint optimization, with a shared emphasis on achievement of both excellence in technical performance and quality in people's work lives. Sociotechnical theory, as distinct from sociotechnical systems, proposes a number of different ways of achieving joint optimization. They are usually based on designing different kinds of organization, according to which the functional output of different sociotechnical elements leads to system efficiency, productive sustainability, user satisfaction, and change management. Overview Sociotechnical refers to the interrelatedness of social and technical aspects of an organization. Sociotechnical theory is founded on two main principles: One is that the interaction of social and technical factors creates the conditions for successful (or unsuccessful) organizational performance. This interaction consists partly of linear "cause and effect" relationships (the relationships that are normally "designed") and partly from "non-linear", complex, even unpredictable relationships (the good or bad relationships that are often unexpected). Whether designed or not, both types of interaction occur when socio and technical elements are put to work. The corollary of this, and the second of the two main principles, is that optimization of each aspect alone (socio or technical) tends to increase not only the quantity of unpredictable, "un-designed" relationships, but those relationships that are injurious to the system's performance. Therefore, sociotechnical theory is about joint optimization, that is, designing the social system and technical system in tandem so that they work smoothly together. Sociotechnical theory, as distinct from sociotechnical systems, proposes a number of different ways of achieving joint optimization. They are usually based on designing different kinds of organization, ones in which the relationships between socio and technical elements lead to the emergence of productivity and wellbeing, rather than all too often case of new technology failing to meet the expectations of designers and users alike. The scientific literature shows terms like sociotechnical all one word, or socio-technical with a hyphen, sociotechnical theory, sociotechnical system and sociotechnical systems theory. All of these terms appear ubiquitously but their actual meanings often remain unclear. 
The key term "sociotechnical" is something of a buzzword and its varied usage can be unpicked. What can be said about it, though, is that it is most often used to simply, and quite correctly, describe any kind of organization that is composed of people and technology. The key elements of the STS approach include combining the human elements and the technical systems together to enable new possibilities for work and pave the way for technological change (Trist, 1981). The involvement of human elements in negotiations may cause a larger workload initially, but it is crucial that requirements can be determined and accommodated for prior to implementation as it is central to the systems success. Due to its mutual causality (Davis, 1977), the STS approach has become widely linked with autonomy, completeness and job satisfaction as both systems can work together to achieving a goal. Enid Mumford (1983) defines the socio-technical approach to recognize technology and people to ensure work systems are highly efficient and contain better characteristics which leads to higher job satisfaction for employees, resulting in a sense of fulfilment to improving quality of work and exceeding expectations. Mumford concludes that the development of information systems is not a technical issue, but a business organization issue which is concerned with the process of change. Principles Some of the central principles of sociotechnical theory were elaborated in a seminal paper by Eric Trist and Ken Bamforth in 1951. This is an interesting case study which, like most of the work in sociotechnical theory, is focused on a form of 'production system' expressive of the era and the contemporary technological systems it contained. The study was based on the paradoxical observation that despite improved technology, productivity was falling, and that despite better pay and amenities, absenteeism was increasing. This particular rational organisation had become irrational. The cause of the problem was hypothesized to be the adoption of a new form of production technology which had created the need for a bureaucratic form of organization (rather like classic command-and-control). In this specific example, technology brought with it a retrograde step in organizational design terms. The analysis that followed introduced the terms "socio" and "technical" and elaborated on many of the core principles that sociotechnical theory subsequently became. “The key elements of the STS approach include combining the human elements and the technical systems together to enable new possibilities for work and pave the way for technological change. Due to its mutual causality, the STS approach has become widely linked with autonomy, completeness and job satisfaction as both systems can work together to achieving a goal.” Responsible autonomy Sociotechnical theory was pioneering for its shift in emphasis, a shift towards considering teams or groups as the primary unit of analysis and not the individual. Sociotechnical theory pays particular attention to internal supervision and leadership at the level of the "group" and refers to it as "responsible autonomy". The overriding point seems to be that having the simple ability of individual team members being able to perform their function is not the only predictor of group effectiveness. There are a range of issues in team cohesion research, for example, that are answered by having the regulation and leadership internal to a group or team. 
These, and other factors, play an integral and parallel role in ensuring successful teamwork which sociotechnical theory exploits. The idea of semi-autonomous groups conveys a number of further advantages. Not least among these, especially in hazardous environments, is the often felt need on the part of people in the organisation for a role in a small primary group. It is argued that such a need arises in cases where the means for effective communication are often somewhat limited. As Carvalho states, this is because "...operators use verbal exchanges to produce continuous, redundant and recursive interactions to successfully construct and maintain individual and mutual awareness...". The immediacy and proximity of trusted team members makes it possible for this to occur. The coevolution of technology and organizations brings with it an expanding array of new possibilities for novel interaction. Responsible autonomy could become more distributed along with the team(s) themselves. The key to responsible autonomy seems to be to design an organization possessing the characteristics of small groups whilst preventing the "silo-thinking" and "stovepipe" neologisms of contemporary management theory. In order to preserve "...intact the loyalties on which the small group [depend]...the system as a whole [needs to contain] its bad in a way that [does] not destroy its good". In practice, this requires groups to be responsible for their own internal regulation and supervision, with the primary task of relating the group to the wider system falling explicitly to a group leader. This principle, therefore, describes a strategy for removing more traditional command hierarchies. Adaptability Carvajal states that "the rate at which uncertainty overwhelms an organisation is related more to its internal structure than to the amount of environmental uncertainty". Sitter in 1997 offered two solutions for organisations confronted, like the military, with an environment of increased (and increasing) complexity: "The first option is to restore the fit with the external complexity by an increasing internal complexity. ...This usually means the creation of more staff functions or the enlargement of staff-functions and/or the investment in vertical information systems". Vertical information systems are often confused for "network enabled capability" systems (NEC) but an important distinction needs to be made, which Sitter et al. propose as their second option: "...the organisation tries to deal with the external complexity by 'reducing' the internal control and coordination needs. ...This option might be called the strategy of 'simple organisations and complex jobs'". This all contributes to a number of unique advantages. Firstly is the issue of "human redundancy" in which "groups of this kind were free to set their own targets, so that aspiration levels with respect to production could be adjusted to the age and stamina of the individuals concerned". Human redundancy speaks towards the flexibility, ubiquity and pervasiveness of resources within NEC. The second issue is that of complexity. Complexity lies at the heart of many organisational contexts (there are numerous organizational paradigms that struggle to cope with it). Trist and Bamforth (1951) could have been writing about these with the following passage: "A very large variety of unfavourable and changing environmental conditions is encountered ... many of which are impossible to predict. Others, though predictable, are impossible to alter." 
Many type of organisations are clearly motivated by the appealing "industrial age", rational principles of "factory production", a particular approach to dealing with complexity: "In the factory a comparatively high degree of control can be exercised over the complex and moving "figure" of a production sequence, since it is possible to maintain the "ground" in a comparatively passive and constant state". On the other hand, many activities are constantly faced with the possibility of "untoward activity in the 'ground'" of the 'figure-ground' relationship" The central problem, one that appears to be at the nub of many problems that "classic" organisations have with complexity, is that "The instability of the 'ground' limits the applicability ... of methods derived from the factory". In Classic organisations, problems with the moving "figure" and moving "ground" often become magnified through a much larger social space, one in which there is a far greater extent of hierarchical task interdependence. For this reason, the semi-autonomous group, and its ability to make a much more fine grained response to the "ground" situation, can be regarded as "agile". Added to which, local problems that do arise need not propagate throughout the entire system (to affect the workload and quality of work of many others) because a complex organization doing simple tasks has been replaced by a simpler organization doing more complex tasks. The agility and internal regulation of the group allows problems to be solved locally without propagation through a larger social space, thus increasing tempo. Whole tasks Another concept in sociotechnical theory is the "whole task". A whole task "has the advantage of placing responsibility for the ... task squarely on the shoulders of a single, small, face-to-face group which experiences the entire cycle of operations within the compass of its membership." The Sociotechnical embodiment of this principle is the notion of minimal critical specification. This principle states that, "While it may be necessary to be quite precise about what has to be done, it is rarely necessary to be precise about how it is done". This is no more illustrated by the antithetical example of "working to rule" and the virtual collapse of any system that is subject to the intentional withdrawal of human adaptation to situations and contexts. The key factor in minimally critically specifying tasks is the responsible autonomy of the group to decide, based on local conditions, how best to undertake the task in a flexible adaptive manner. This principle is isomorphic with ideas like effects-based operations (EBO). EBO asks the question of what goal is it that we want to achieve, what objective is it that we need to reach rather than what tasks have to be undertaken, when and how. The EBO concept enables the managers to "...manipulate and decompose high level effects. They must then assign lesser effects as objectives for subordinates to achieve. The intention is that subordinates' actions will cumulatively achieve the overall effects desired". In other words, the focus shifts from being a scriptwriter for tasks to instead being a designer of behaviours. In some cases, this can make the task of the manager significantly less arduous. Meaningfulness of tasks Effects-based operations and the notion of a "whole task", combined with adaptability and responsible autonomy, have additional advantages for those at work in the organization. 
This is because "for each participant the task has total significance and dynamic closure" as well as the requirement to deploy a multiplicity of skills and to have the responsible autonomy in order to select when and how to do so. This is clearly hinting at a relaxation of the myriad of control mechanisms found in more classically designed organizations. Greater interdependence (through diffuse processes such as globalisation) also bring with them an issue of size, in which "the scale of a task transcends the limits of simple spatio-temporal structure. By this is meant conditions under which those concerned can complete a job in one place at one time, i.e., the situation of the face-to-face, or singular group". In other words, in classic organisations the "wholeness" of a task is often diminished by multiple group integration and spatiotemporal disintegration. The group based form of organization design proposed by sociotechnical theory combined with new technological possibilities (such as the internet) provide a response to this often forgotten issue, one that contributes significantly to joint optimisation. Topics Sociotechnical system A sociotechnical system is the term usually given to any instantiation of socio and technical elements engaged in goal directed behaviour. Sociotechnical systems are a particular expression of sociotechnical theory, although they are not necessarily one and the same thing. Sociotechnical systems theory is a mixture of sociotechnical theory, joint optimisation and so forth and general systems theory. The term sociotechnical system recognises that organizations have boundaries and that transactions occur within the system (and its sub-systems) and between the wider context and dynamics of the environment. It is an extension of Sociotechnical Theory which provides a richer descriptive and conceptual language for describing, analysing and designing organisations. A Sociotechnical System, therefore, often describes a 'thing' (an interlinked, systems based mixture of people, technology and their environment). Social technical means that technology, which by definition, should not be allowed to be the controlling factor when new work systems are implemented. So in order to be classified as 'Sociotechnical', equal attention must be paid to providing a high quality and satisfying work environment for employees. The Tavistock researchers, presented that employees who will be using the new and improved system, should be participating in determining the required quality of working life improvements. Participative socio‐technical design can be achieved by in‐depth interviews, questionnaires and collection of data. Participative socio-technical design can be conducted through in-depth interviews, the collection of statistics and the analysis of relevant documents. These will provide important comparative data that can help approve or disprove the chosen hypotheses. A common approach to participative design is, whenever possible, to use a democratically selected user design group as the key information collectors and decision makers. The design group is backed by a committee of senior staff who can lay the foundations and subsequently oversee the project. Alter describes sociotechnical analysis and design methods to not be a strong point in the information systems practice. The aim of socio-technical designs is to optimise and join both social and technical systems. 
However, the problem is that the technical system, the social system, the work system and their joint optimisation are often not defined as precisely as they should be. Sustainability Standalone, incremental improvements are not sufficient to address current, let alone future, sustainability challenges. These challenges will require deep changes to sociotechnical systems. Theories on innovation systems; sustainable innovations; system thinking and design; and sustainability transitions, among others, have attempted to describe potential changes capable of shifting development towards more sustainable directions. Autonomous work teams Autonomous work teams, also called self-managed teams, are an alternative to traditional assembly line methods. Rather than having a large number of employees each do a small operation to assemble a product, the employees are organized into small teams, each of which is responsible for assembling an entire product. These teams are self-managed, and are independent of one another. In the mid-1970s, Pehr Gyllenhammar created his new "dock assembly" work system at Volvo's Kalmar Plant. Instead of the traditional flow-line system of car production, self-managed teams would assemble the entire car. The idea of worker directors – a director on the company board who is a representative of the workforce – was established through this project, and the Swedish government required them in state enterprises. Job enrichment Job enrichment, in organizational development, human resources management, and organizational behavior, is the process of giving the employee a wider and higher-level scope of responsibility with increased decision-making authority. This contrasts with job enlargement, which does not involve greater authority but only an increased number of duties. The concept of minimal critical specification (Mumford, 2006) states that workers should be told what to do but not how to do it; deciding this should be left to their initiative. She says they can be involved in work groups, matrices and networks. Employees should be given the correct objectives but should decide for themselves how to achieve them. Job enlargement Job enlargement means increasing the scope of a job through extending the range of its duties and responsibilities. This contradicts the principles of specialisation and the division of labour whereby work is divided into small units, each of which is performed repetitively by an individual worker. Some motivational theories suggest that the boredom and alienation caused by the division of labour can actually cause efficiency to fall. Job rotation Job rotation is an approach to management development in which an individual is moved through a schedule of assignments designed to give him or her a breadth of exposure to the entire operation. Job rotation is also practiced to allow qualified employees to gain more insight into the processes of a company and to increase job satisfaction through job variation. The term job rotation can also mean the scheduled exchange of persons in offices, especially in public offices, prior to the end of incumbency or the legislative period. This was practiced by the German Green Party for some time but has since been discontinued. Motivation Motivation in psychology refers to the initiation, direction, intensity and persistence of behavior. Motivation is a temporal and dynamic state that should not be confused with personality or emotion. Motivation is having the desire and willingness to do something. 
A motivated person can be reaching for a long-term goal such as becoming a professional writer or a more short-term goal like learning how to spell a particular word. Personality invariably refers to more or less permanent characteristics of an individual's state of being (e.g., shy, extrovert, conscientious). As opposed to motivation, emotion refers to temporal states that do not immediately link to behavior (e.g., anger, grief, happiness). Socio-technical design is seen as a means by which intelligence and skill, combined with emerging technologies, can improve the work–life balance of employees; its aim is to achieve both a safer and more pleasurable workplace and greater democracy in society. The achievement of these aims would therefore lead to increased motivation of employees and would directly and positively influence their ability to express ideas. Enid Mumford's work on redesigning human systems also expressed that it is the role of the facilitator to “keep the members interested and motivated toward the design task, to help them resolve any conflicts”. Mumford states that although technology and organizational structures may change in industry, employee rights and needs must be given high priority. Future commercial success requires motivated workforces who are committed to their employers' interests. This requires companies and managers who are dedicated to creating this motivation and who recognize what is required for it to be achieved. Returning to socio-technical values, objectives and principles may provide an answer. Mumford reflects on leadership within organisations, since a lack of leadership has proven to be the downfall of many companies. As competition increases, employers lose valued and qualified employees to their competitors. Opportunities such as better job roles and the chance to work one's way up motivate these employees to join their rivals. Mumford suggests that delegating responsibility could help employees stay motivated, as they would feel appreciated and a sense of belonging, thus keeping them in their current organization. Leadership is key, as employees prefer to follow a structure and to know that there is opportunity to improve. When Mumford analysed the role of user participation during two ES projects, one drawback found was that users found it difficult to see beyond their current practices and to anticipate how things could be done differently. Motivation was another challenge during this process, as users were not interested in participating (Wagner, 2007). Process improvement Process improvement in organizational development is a series of actions taken to identify, analyze and improve existing processes within an organization to meet new goals and objectives. These actions often follow a specific methodology or strategy to create successful results. Task analysis Task analysis is the analysis of how a task is accomplished, including a detailed description of both manual and mental activities, task and element durations, task frequency, task allocation, task complexity, environmental conditions, necessary clothing and equipment, and any other unique factors involved in or required for one or more people to perform a given task. This information can then be used for many purposes, such as personnel selection and training, tool or equipment design, procedure design (e.g., design of checklists or decision support systems) and automation. 
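The elements listed above can be captured as a simple record structure. The following is a minimal, hypothetical sketch in Python (the field names are illustrative, not a standard task-analysis schema) of how one task-analysis entry might be represented for later use in selection, training or tool design:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaskAnalysisEntry:
    """One observed task, described along the dimensions named above (illustrative fields)."""
    task_name: str
    manual_activities: List[str] = field(default_factory=list)   # physical steps
    mental_activities: List[str] = field(default_factory=list)   # decisions and judgements
    duration_seconds: float = 0.0                                 # typical element/task duration
    frequency_per_shift: float = 0.0                              # how often the task occurs
    allocated_to: str = "operator"                                # person, role or machine
    complexity: str = "low"                                       # e.g. low / medium / high
    environment: str = ""                                         # environmental conditions
    equipment: List[str] = field(default_factory=list)            # clothing and equipment required

# Example entry for a hypothetical control-room task.
entry = TaskAnalysisEntry(
    task_name="Acknowledge pressure alarm",
    manual_activities=["press acknowledge key", "log event"],
    mental_activities=["compare reading against limit", "decide whether to escalate"],
    duration_seconds=45,
    frequency_per_shift=3,
    complexity="medium",
    environment="control room, normal lighting",
    equipment=["console", "log book"],
)
print(entry.task_name, entry.allocated_to)
```

Collections of such records could then feed the purposes mentioned above, such as procedure design or decisions about automation.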
Job design Job design or work design in organizational development is the application of sociotechnical systems principles and techniques to the humanization of work, for example, through job enrichment. The aims of work design are improved job satisfaction, improved throughput, improved quality and reduced employee problems, e.g., grievances and absenteeism. Deliberations Deliberations are key units of analysis in non-linear knowledge work. They are 'choice points' that move knowledge work forward. As originated and defined by Cal Pava (1983) in a second-generation development of STS theory, deliberations are patterns of exchange and communication to reduce the equivocality of a problematic issue; for example, for systems engineering work, what features to develop in new software. Deliberations are not discrete decisions; they are a more continuous context for decisions. They have three aspects: topics, forums, and participants. Work System Theory (WST) and Work System Method (WSM) WST and WSM simplify the conceptualization of the traditionally complicated socio-technical system (STS) approach (Alter, 2015). Extending prior research on STS, which divides the social and technical aspects, WST combines the two perspectives in a work system and outlines the framework for WSM, which treats the work system as the system of interest and proposes solutions accordingly (Alter, 2015). Work System Theory and the Work System Method are both socio-technical approaches expressed in the form of work systems. The Work System Method also encourages the use of socio-technical ideas and values in IS development, use and implementation. Evolution of socio-technical systems Socio-technical design evolved from being approached exclusively as a social system; the need for joint optimisation of the social and technical systems was realised later. Analysis was divided into levels, from primary work – which looks into principles and description – up to how to incorporate technical designs at a macrosocial level. Benefits of seeing sociotechnical systems through a work system lens Analysing and designing sociotechnical systems from a work system perspective eliminates the artificial distinction between the social system and the technical system, and with it the separate idea of joint optimization. Using a work system lens can bring many benefits, such as: viewing the work system as a whole, making it easier to discuss and analyse; a more organised approach, starting from even a basic understanding of a work system; a readily usable analysis method, making it more adaptable for performing analysis of a work system; no requirement for guidance by experts and researchers; reinforcing the idea that a work system exists to produce products or services; making it easier to theorize about potential staff reductions, changing job roles and reorganizations; encouraging motivation and goodwill while reducing the stress of monitoring; and recognising that documentation and practice may differ. Problems to overcome Differences in culture across the world; data theft of company information and networked systems; the "Big Brother" effect on employees; hierarchical imbalance between managers and lower staff; and persuading people away from an old attitude of 'instant fixes' adopted without any real thought of structure. Social network / structure The social network perspective first emerged in the 1920s at Harvard University within the Sociology Department. 
Within information systems, social networks have been used to study the behaviour of teams, organisations and industries. The social network perspective is useful for studying some of the emerging forms of social or organisational arrangements and the roles of ICT. Social media and Artificial Intelligence Recent work on artificial intelligence considers large sociotechnical systems, such as social networks and online marketplaces, as agents whose behaviour can be purposeful and adaptive. The behaviour of recommender systems can therefore be analysed in the language and framework of sociotechnical systems, also leading to a new perspective on their legal regulation. Multi-directional inheritance Multi-directional inheritance is the premise that work systems inherit their purpose, meaning and structure from the organisation and reflect the priorities and purposes of the organisation that encompasses them. Fundamentally, this premise includes crucial assumptions about sequencing, timescales, and precedence. The purpose, meaning and structure can derive from multiple contexts, and once obtained they can be passed on to the sociotechnical systems that emerge throughout the organisation. Sociological perspective on sociotechnical systems In the 1990s, research interest in the social dimensions of IS, directed at the relationship among IS development, use, and the resultant social and organizational changes, offered fresh insight into the emerging role of ICT within differing organizational contexts, drawing directly on sociological theories of institutions. This sociotechnical research has informed, if not shaped, IS scholarship. Sociological theories have offered a solid basis upon which emerging sociotechnical research has built. ETHICS history The ETHICS (Effective Technical and Human Implementation of Computer Systems) process was used successfully by Mumford in a variety of projects after the idea was conceived during the Turners Asbestos Cement project. Having overlooked a vital request from the customer to discuss and potentially fix the issues found with the current organisation, she gave her advice on building a system; the advice was not well received, and Mumford was told that a similar system was already in use. This is when she realised that a participative approach would benefit many future projects. Enid Mumford's development of ETHICS was also a push to remind those in the field that research does not always need to be done on things of current interest, and that following immediate trends at the expense of one's current research is not always the way forward. It is a reminder that work should always be finished and that, as she said, we should never “write them off with no outcome”. See also References Further reading Kenyon B. De Greene (1973). Sociotechnical systems: factors in analysis, design, and management. Jose Luis Mate and Andres Silva (2005). Requirements Engineering for Sociotechnical Systems. Enid Mumford (1985). Sociotechnical Systems Design: Evolving Theory and Practice. William A. Pasmore and John J. Sherwood (1978). Sociotechnical Systems: A Sourcebook. William A. Pasmore (1988). Designing Effective Organizations: The Sociotechnical Systems Perspective. Pascal Salembier, Tahar Hakim Benchekroun (2002). Cooperation and Complexity in Sociotechnical Systems. Sawyer, S. and Jarrahi, M.H. (2014) The Sociotechnical Perspective: Information Systems and Information Technology, Volume 2 (Computing Handbook Set, Third Edition), edited by Heikki Topi and Allen Tucker. Chapman and Hall/CRC. 
http://sawyer.syr.edu/publications/2013/sociotechnical%20chapter.pdf James C. Taylor and David F. Felten (1993). Performance by Design: Sociotechnical Systems in North America. Eric Trist and H. Murray (eds.) (1993). The Social Engagement of Social Science, Volume II: The Socio-Technical Perspective. Philadelphia: University of Pennsylvania Press. http://www.moderntimesworkplace.com/archives/archives.html James T. Ziegenfuss (1983). Patients' Rights and Organizational Models: Sociotechnical Systems Research on Mental Health Programs. Hongbin Zha (2006). Interactive Technologies and Sociotechnical Systems: 12th International Conference, VSMM 2006, Xi'an, China, October 18–20, 2006, Proceedings. Trist, E. (1981). The Evolution of Socio-technical Systems: A Conceptual Framework and an Action Research Program. Toronto: Ontario Ministry of Labour, Ontario Quality of Working Life Centre. Amelsvoort, P., & Mohr, B. (Co-Eds.) (2016). "Co-Creating Humane and Innovative Organizations: Evolutions in the Practice of Socio-Technical System Design". Global STS-D Network Press. Pava, C. (1983). Managing New Office Technology. New York: Free Press. External links JP Vos, The making of strategic realities: an application of the social systems theory of Niklas Luhmann, Technical University of Eindhoven, Department of Technology Management, 2002. STS Roundtable, an international not-for-profit association of professional and scholarly practitioners of Sociotechnical Systems Theory IEEE 1st Workshop on Socio-Technical Aspects of Mashups http://istheory.byu.edu/wiki/Socio-technical_theory http://www.moderntimesworkplace.com/archives/archives.html, Archived Vol I, II, & III of The Tavistock Anthology Philosophy of technology Social systems Sociological theories Systems psychology Systems theory Management cybernetics
Developmental stage theories
In psychology, developmental stage theories are theories that divide psychological development into distinct stages which are characterized by qualitative differences in behavior. There are several different views about psychological and physical development and how they proceed throughout the life span. The two main psychological developmental theories include continuous and discontinuous development. In addition to individual differences in development, developmental psychologists generally agree that development occurs in an orderly way and in different areas simultaneously. Stage theories The development of the human mind is complex and a debated subject, and may take place in a continuous or discontinuous fashion. Continuous development, like the height of a child, is measurable and quantitative, while discontinuous development is qualitative, like hair or skin color, where those traits fall only under a few specific phenotypes. Continuous development involves gradual and ongoing changes throughout the life span, with behavior in the earlier stages of development providing the basis of skills and abilities required for the next stages. On the other hand, discontinuous development involves distinct and separate stages, with different kinds of behavior occurring in each stage. Stage theories of development rest on the assumption that development is a discontinuous process involving distinct stages which are characterized by qualitative differences in behavior. They also assume that the structure of the stage is not variable according to each individual; however the time of each stage may vary individually. While some theories focus primarily on the healthy development of children, others propose stages that are characterized by a maturity rarely reached before old age. Ego-psychology The psychosexual stage theory created by Sigmund Freud (b.1856) consists of five distinct stages of Psychosexual development that individuals will pass through for the duration of their lifespan. Four of these stages stretch from birth through puberty and the final stage continues throughout the remainder of life. Erik Erikson (b.1902) developed a psychosocial developmental theory, which was both influenced and built upon by Freud, which includes four childhood and four adult stages of life that capture the essence of personality during each period of development. Each of Erikson's stages include both a positive and negative influences that can go on to be seen later in an individual's life. His theory includes the influence of biological factors on development. Jane Loevinger (b.1918) built on the work of Erikson in her description of stages of ego development. Individuation and attachment in ego-psychology Margaret Mahler's (b.1897) theory of separation-individuation in child development contains three phases regarding the child's object relations. John Bowlby's (b.1907) attachment theory proposes that developmental needs and attachment in children are connected to particular people, places, and objects throughout our lives. These connections provide a behavior in the young child that is heavily affected and relied on throughout the entire lifespan. In case of maternal deprivation, this development may be disturbed. Robert Kegan (b.1946) provided a theory of the evolving self, which describes the constructive development theory of subject–object relations. 
Cognitive and moral development Cognitive development Piaget's cognitive development theory Jean Piaget's cognitive developmental theory describes four major stages from birth through puberty, the last of which starts at 12 years and has no terminating age: Sensorimotor: (birth to 2 years), Preoperations: (2 to 7 years), Concrete operations: (7 to 11 years), and Formal Operations: (from 12 years). Each stage has at least two substages, usually called early and fully. Piaget's theory is a structural stage theory, which implies that: Each stage is qualitatively different; it is a change in nature, not just quantity; Each stage lays the foundation for the next; Everyone goes through the stages in the same order. Neo-Piagetian theories Neo-Piagetian theories criticize and build on Piaget's work. Juan Pascaual-Leone was the first to propose a neo-Piagetian stage theory. Since that time several neo-Piagetian theories of cognitive development have been proposed. These include the theories of Robbie Case, Grame Halford, Andreas Demetriou and Kurt W. Fischer. The theory of Michael Commons' model of hierarchical complexity is also relevant. The description of stages in these theories is more elaborate and focuses on underlying mechanisms of information processing rather than on reasoning as such. In fact, development in information processing capacity is invoked to explain the development of reasoning. More stages are described (as many as 15 stages), with 4 being added beyond the stage of Formal operations. Most stage sequences map onto one another. Post-Piagetian stages are free of content and context and are therefore very general. Other related theories Lawrence Kohlberg (b.1927) in his stages of moral development described how individuals developed moral reasoning. Kohlberg agreed with Piaget's theory of moral development that moral understanding is linked to cognitive development. His three levels were categorized as: preconventional, conventional, and postconventional, all of which have two sub-stages. James W. Fowler (b.1940), and his stages of faith development theory, builds off of both Piaget's and Kohlberg's schemes. Learning and education Maria Montessori (b.1871) described a number of stages in her educational philosophy. Albert Bandura (b.1925), in his social learning theory, emphasizes the child's experiential learning from the environment. Spirituality and consultancy Inspired by Theosophy, Rudolf Steiner (b.1861) had developed a stage theory based on seven-year life phases. Three childhood phases (conception to 21 years) are followed by three stages of development of the ego (21–42 years), concluding with three stages of spiritual development (42-63). The theory is applied in Waldorf education Clare W. Graves (b.1914) developed an emergent cyclical levels of existence theory. It was popularized by Don Beck (b.1937) and Chris Cowan's as spiral dynamics, and mainly applied in consultancy. Ken Wilber (b.1949) integrated Spiral Dynamics in his integral theory, which also includes psychological stages of development as described by Jean Piaget and Jane Loevinger, the spiritual models of Sri Aurobindo and Rudolf Steiner, and Jean Gebsers theory of mutations of consciousness in human history. Other theories Lev Vygotsky (b.1896) developed several theories, particularly zone of proximal development. Other theories are not exactly developmental stage theories, but do incorporate a hierarchy of psychological factors and elements. Abraham Maslow (b.1908) described a hierarchy of needs. 
James Marcia (b.1937) developed a theory of identity achievement and identity status. References
Group work
Group work is a form of voluntary association of members who benefit from cooperative learning, which enhances the total output of the activity beyond what could be achieved individually. It aims to cater for individual differences and to develop skills such as communication, collaboration and critical thinking. It is also meant to develop generic knowledge and socially acceptable attitudes. Through group work, a "group mind" - conforming to standards of behavior and judgement - can be fostered. Specifically in psychotherapy and social work, "group work" refers to group therapy, offered by a practitioner trained in psychotherapy, psychoanalysis, counseling or other relevant disciplines. Social group work Social group work is a method of social work that enhances people's social functioning through purposeful group experiences and helps them cope more effectively with personal, group or community problems (Marjorie Murphy, 1959). Social group work is a primary modality of social work in bringing about positive change. It is defined as an educational process emphasizing the development and social adjustment of an individual through voluntary association, and the use of this association as a means of furthering socially desirable ends. It is a psychosocial process concerned with developing leadership and cooperation by building on the interests of the group for a social purpose. Social group work is a method through which individuals in groups in a social agency setting are helped by a worker who guides their interaction through group activities so that they may relate to others and experience growth opportunities in line with their needs and capacities, with a view to individual, group and community development. It aims at the development of persons through the interplay of personalities in a group setting, and at the creation of group settings that provide for integrated, cooperative group action towards common ends. It is also a process and a method through which group life is affected by a worker who consciously directs the interacting process towards the accomplishment of goals conceived in a democratic frame of reference. Its distinct characteristic lies in the fact that group work uses group experience as a means of individual growth and development, and that the group worker is concerned with developing social responsibility and active citizenship for the improvement of democratic society. Group work is a way of serving individuals within and through small face-to-face groups in order to bring about the desired change among client participants. Models There are four models in social group work: Remedial model (Vinter, R. D., 1967) – The remedial model focuses on the individual's dysfunction and utilizes the group as a context and means for altering deviant behaviour. Reciprocal or Mediating model (W. Schwartz, 1961) - A model based on open systems theory, humanistic psychology and the existential perspective. A relationship rooted in reciprocal transactions and intensive commitment is considered critical in this model. Developmental model (Bernstein, S. & Lowy, 1965) - A model based on Erikson's ego psychology, group dynamics and conflict theory. In this model groups are seen as having "a degree of independence and autonomy, but the dynamics of to and fro flow between them and their members, between them and their social settings, are considered crucial to their existence, viability and achievements". Connectedness (intimacy and closeness) is considered critical in this model. 
Social goals model (Gisela Konopka & Weince, 1964) - A model based on 'programming' social consciousness, social responsibility, and social change. It suggests that democratic participation with others in a group situation can promote enhancement of personal function in individuals, which in-turn can affect social change. It results in heightened self-esteem and a rise in social power for the members of the group collectively and as individuals. See also Social case work Further reading Douglas, Tom (1976), Group Work Practice, International Universities Press, New York. Konopka, G. (1963), Social Group Work : A Helping Process, Prentice Hall, Englewood Cliffs. Treeker, H.B. (1955), Social Group Work, Principles and Practices, Whiteside, New York. Phillips, Helen, U. (1957), Essential of Social Group Work Skill, Association Press, New York. References Harleigh B. Trecker, Social Group Work: Principles and Practices, Association Press, 1972 Joan Benjamin, Judith Bessant and Rob Watts. Making Groups Work: Rethinking Practice, Allen & Unwin, 1997 Ellen Sarkisian, "Working in Groups." Working in Groups - A Quick Guide for Students, Derek Bok Center, Harvard University Group psychotherapy Group processes Social work
Laban movement analysis
Laban movement analysis (LMA), sometimes Laban/Bartenieff movement analysis, is a method and language for describing, visualizing, interpreting and documenting human movement. It is based on the original work of Rudolf Laban, which was developed and extended by Lisa Ullmann, Irmgard Bartenieff, Warren Lamb and others. LMA draws from multiple fields including anatomy, kinesiology and psychology. It is used by dancers, actors, musicians and athletes; by health professionals such as physical and occupational therapists and psychotherapists; and in anthropology, business consulting and leadership development. Labanotation (or Kinetography Laban), a notation system for recording and analyzing movement, is used in LMA, but Labanotation is a separate system. Categories of movement Laban movement analysis is contemporarily categorised in various way. Originally, these categories were very basic and Laban himself referred mostly to Eukinetics - which is his effort studies - and Choreutics - which is Spatial Harmony theory. His student Irmgard Bartenieff later further elaborated these categories in four - Body, Effort, Shape and Space - and this system, known as BESS is commonly taught today. However, BESS is not the only organisation of Laban's theory in use. In the U.K. for example, more influenced by Lisa Ullmann, another student of Laban, the categories are Body, Effort, Space and Relationship with Shape being interwoven into Body, Space and Relationship. The categories of BESS are as follows: Body - what the body is doing and the interrelationships within the body Effort - the qualities of movement Shape - how the body is changing shape and what motivates it to do so Space - where the body is moving and the harmonic relationships in space Other categories, that are occasionally mentioned in some literature, are relationship and phrasing. These are less well defined. Relationship is the interaction between people, body parts or a person and an object. Phrasing is defined as being the personal expression of a movement. These categories are in turn occasionally divided into kinematic and non-kinematic categories to distinguish which categories relate to changes to body relations over time and space. Body The body category describes structural and physical characteristics of the human body while moving. This category is responsible for describing which body parts are moving, which parts are connected, which parts are influenced by others, and general statements about body organization. Several subcategories of body are: Initiation of movement starting from specific bodies; Connection of different bodies to each other; Sequencing of movement between parts of the body; and Patterns of body organization and connectivity, called "patterns of total body square connectivity", "developmental hyper movement patterns", or "neuromuscular shape-shifting patterns". Effort Effort, or what Laban sometimes described as dynamics, is a system for understanding the more subtle characteristics about movement with respect to inner intention. The difference between punching someone in anger and reaching for a glass is slight in terms of body organization – both rely on extension of the arm. The attention to the strength of the movement, the control of the movement and the timing of the movement are very different. Effort has four subcategories (effort factors), each of which has two opposite polarities (Effort elements). 
Laban named the combination of the first three categories (Space, Weight, and Time) the Effort Actions, or Action Drive. The eight combinations are descriptively named Float, Punch (Thrust), Glide, Slash, Dab, Wring, Flick, and Press. The Action Efforts have been used extensively in some acting schools, including ALRA, Manchester School of Theatre, LIPA and London College of Music to train in the ability to change quickly between physical manifestations of emotion. Flow, on the other hand, is responsible for the continuousness or ongoingness of motions. Without any Flow Effort, movement must be contained in a single initiation and action, which is why there are specific names for the Flow-less Action configurations of Effort. In general it is very difficult to remove Flow from much movement, and so a full analysis of Effort will typically need to go beyond the Effort Actions. Combinations of Efforts While the individual motion factors of Space, Time, Weight and Flow may be observed, usually they will appear in combinations. Combinations of 3 Motion Factors are known as drives. The drives are: The Action Drive - where Weight, Space and Time are present but Flow is missing The Passion Drive - where Weight, Time and Flow are present but Space is missing The Spell Drive - where Weight, Space and Flow are present but Time is missing The Vision Drive - where Space, Time and Flow are present but Weight is missing Alongside the drives, combinations of two efforts are known as states. The states are known as: Awake - combining Space and Time Dreamlike - combining Weight and Flow Distant - combining Space and Flow Near/Rhythm - combining Time and Weight Stabile - combining Space and Weight Labile/Mobile - combining Time and Flow Full effort, where all 4 motion factors are equally expressed, is usually considered to be a rare and usually momentary occurrence. The states and drives are often discussed as having distinct psychological characteristics. Shape While the Body category primarily develops connections within the body and the body/space intent, the way the body changes shape during movement is further experienced and analyzed through the Shape category. It is important to remember that all categories are related, and Shape is often an integrating factor for combining the categories into meaningful movement. There are several subcategories in Shape: "Shape Forms" describe static shapes that the body takes, such as Wall-like, Ball-like, and Pin-like. "Modes of Shape Change" describe the way the body is interacting with and the relationship the body has to the environment. There are three Modes of Shape Change: Shape Flow: Representing a relationship of the body to itself. Essentially a stream of consciousness expressed through movement, this could be amoebic movement or could be mundane habitual actions, like shrugging, shivering, rubbing an injured shoulder, etc. Directional: Representing a relationship where the body is directed toward some part of the environment. It is divided further into Spoke-like (punching, pointing, etc.) and Arc-like (swinging a tennis racket, painting a fence) Carving: Representing a relationship where the body is actively and three dimensionally interacting with the volume of the environment. Examples include kneading bread dough, wringing out a towel, avoiding laser-beams or miming the shape of an imaginary object. 
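Because the states and drives are defined purely by which motion factors are present, the scheme can be enumerated mechanically. The following is a minimal sketch in Python, assuming nothing beyond the combinations named above (the Effort elements themselves, such as the two polarities of each factor, are not modelled):

```python
# Effort motion factors and the named combinations described above.
FACTORS = {"Space", "Weight", "Time", "Flow"}

STATES = {                       # combinations of two factors
    frozenset({"Space", "Time"}): "Awake",
    frozenset({"Weight", "Flow"}): "Dreamlike",
    frozenset({"Space", "Flow"}): "Distant",
    frozenset({"Time", "Weight"}): "Near/Rhythm",
    frozenset({"Space", "Weight"}): "Stabile",
    frozenset({"Time", "Flow"}): "Labile/Mobile",
}

DRIVES = {                       # combinations of three factors, named by the missing factor
    "Flow": "Action Drive",      # Weight, Space and Time present
    "Space": "Passion Drive",    # Weight, Time and Flow present
    "Time": "Spell Drive",       # Weight, Space and Flow present
    "Weight": "Vision Drive",    # Space, Time and Flow present
}

def classify(present):
    """Name the state or drive for a set of observed motion factors."""
    present = frozenset(present)
    if len(present) == 2:
        return STATES.get(present, "unnamed")
    if len(present) == 3:
        (missing,) = FACTORS - present
        return DRIVES[missing]
    if present == FACTORS:
        return "Full effort (rare and usually momentary)"
    return "single factor or none"

print(classify({"Space", "Time"}))            # Awake
print(classify({"Weight", "Space", "Time"}))  # Action Drive
```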
In some cases, and historically, this is referred to as Shaping, though many practitioners feel that all three Modes of Shape Change are "shaping" in some way, and that the term is thus ambiguous and overloaded. "Shape Qualities" describe the way the body is changing (in an active way) toward some point in space. In the simplest form, this describes whether the body is currently Opening (growing larger with more extension) or Closing (growing smaller with more flexion). There are more specific terms – Rising, Sinking, Spreading, Enclosing, Advancing, and Retreating, which refer to specific dimensions of spatial orientations. "Shape Flow Support" describes the way the torso (primarily) can change in shape to support movements in the rest of the body. It is often referred to as something which is present or absent, though there are more refined descriptors. Space One of Laban's primary contributions to Laban Movement Analysis (LMA) are his theories of Space. This category involves motion in connection with the environment, and with spatial patterns, pathways, and lines of spatial tension. Laban described a complex system of geometry based on crystalline forms, Platonic solids, and the structure of the human body. He felt that there were ways of organizing and moving in space that were specifically harmonious, in the same sense as music can be harmonious. Some combinations and organizations were more theoretically and aesthetically pleasing. As with music, Space Harmony sometimes takes the form of set 'scales' of movement within geometric forms. These scales can be practiced in order to refine the range of movement and reveal individual movement preferences. The abstract and theoretical depth of this part of the system is often considered to be much greater than the rest of the system. In practical terms, there is much of the Space category that does not specifically contribute to the ideas of Space Harmony. This category also describes and notates choices which refer specifically to space, paying attention to: Kinesphere: the area that the body is moving within and how the mover is paying attention to it. Spatial Intention: the directions or points in space that the mover is identifying or using. Geometrical observations of where the movement is being done, in terms of emphasis of directions, places in space, planar movement, etc. The Space category is currently under continuing development, more so since exploration of non-Euclidean geometry and physics has evolved. Use in human–computer interaction LMA is used in Human-Computer Interaction as a means of extracting useful features from a human's movement to be understood by a computer, as well as generating realistic movement animation for virtual agents and robots. Formal study Laban movement analysis practitioners and educators who studied at LIMS, an accredited institutional member of the National Association of Schools of Dance (NASD), are known as "Certified Movement Analysts" (CMAs). Laban/Bartenieff and Somatic Studies International™ (LSSI), is an approved training program of ISMETA, and offers Movement Analysis and Somatic Practice training, which qualifies “Certified Movement Analysts & Somatic Practitioners” (CMA-SPs). Other courses offer LMA studies, including Integrated Movement Studies, which qualifies "Certified Laban/Bartenieff Movement Analysts" (CLMAs). 
The Laban Guild, set up by Rudolf Laban in the UK, offers courses in Laban Movement Analysis and Labanotation and is responsible for preserving and developing the work in the U.K. Trinity Laban Conservatoire of Music and Dance offers a three-year post-graduate diploma in choreological studies, with Valerie Preston-Dunlop as a course director. See also Benesh Movement Notation Choreography Dance notation Dance Notation Bureau Laban notation symbols Motif description References Further reading Bartenieff, Irmgard, and Dori Lewis (1980). Body Movement; Coping with the Environment. New York: Gordon and Breach. Dell, C., A Primer for Movement Description Using Effort/Shape, Dance Notation Bureau, New York, 1975. Hackney, Peggy (1998) Making Connections: Total Body Integration through Bartenieff Fundamentals, Routledge Publishers, New York. Lamb, Warren (1965). Posture and Gesture; An Introduction to the Study of Physical Behaviour. London: Gerald Duckworth. Lamb, Warren, and Watson, E. (1979). Body Code; The Meaning in Movement. London: Routledge & Kegan Paul. Moore, Carol Lynne (1982). Executives in Action: A Guide to Balanced Decision–making in Management. Estover, Plymouth: MacDonald & Evans. (First published as Action Profiling, 1978.) Moore, Carol Lynne and Kaoru Yamamoto (1988). Beyond Words. New York: Gordon and Breach. Newlove, J. & Dalby, J. (2005) Laban for All, Nick Hern Books, London. External links Laban/Bartenieff & Somatic Studies Canada (LSSC) Laban/Bartenieff & Somatic Studies International (LSSI) Laban Analyses.org Laban analysis and Labanotation searchable database What is LMA? (Glossary) NYU Movement Lab: Intro to LMA Dance research Somatics
Phallogocentrism
In critical theory and deconstruction, phallogocentrism is a neologism coined by Jacques Derrida to refer to the privileging of the masculine (phallus) in the construction of meaning. The term is a blend word of the older terms phallocentrism (focusing on the masculine point of view) and logocentrism (focusing on language in assigning meaning to the world). Derrida and others identified phonocentrism, or the prioritizing of speech over writing, as an integral part of phallogocentrism. Derrida explored this idea in his essay "Plato's Pharmacy". Background In contemporary literary and philosophical works concerned with gender, the term "phallogocentrism" is commonplace largely as a result of the writings of Jacques Derrida, the founder of the philosophy of deconstruction, which is considered by many academics to constitute an essential part of the discourse of postmodernism. Deconstruction is a philosophy concerned with "indeterminateness" and with its opposite, "determinateness". According to deconstruction, indeterminate knowledge is "aporetic", i.e., based on contradictory facts or ideas ("aporias") that make it impossible to determine matters of truth with any degree of certitude; determinate knowledge, on the other hand, is "apodictic", i.e., based on facts or ideas that are considered to be "true", from one perspective or another. The phallogocentric argument is premised on the claim that modern Western culture has been, and continues to be, both culturally and intellectually subjugated by "logocentrism" and "phallocentrism". Logocentrism is the term Derrida uses to refer to the philosophy of determinateness, while phallocentrism is the term he uses to describe the way logocentrism itself has been genderized by a "masculinist (phallic)" and "patriarchal" agenda. Hence, Derrida intentionally merges the two terms phallocentrism and logocentrism as "phallogocentrism". The French feminist thinkers of the school of écriture féminine also share Derrida's phallogocentric reading of 'all of Western metaphysics'. For example, Hélène Cixous and Catherine Clément decry the "dual, hierarchical oppositions" set up by the traditional phallogocentric philosophy of determinateness, wherein "death is always at work" as "the premise of woman's abasement", woman having been "colonized" by phallogocentric thinking. According to Cixous & Clément, the 'crumbling' of this way of thinking will take place through a Derridean-inspired, anti-phallo/logocentric philosophy of indeterminateness. Critique Swedish cyberphilosophy authors Alexander Bard and Jan Söderqvist propose a critique of Derrida's interpretation of phallogocentrism in their works. They advocate a return to phallic vision as fundamental and necessary for Western civilization after 1945. They regard this phallic return as materialized through technology rather than through ever more academic discourse. In response to Derrida et al., Bard & Söderqvist propose that the phallogocentric project – which they call eventology – needs rather to be complemented with a return to nomadology, or the myth of the eternal return of the same, a matriarchal renaissance which they claim has already materialized in systems theory and complexity theory, from which both feminism and androgynism are merely later but welcome effects. According to Bard & Söderqvist it is merely the centrism, and not the phallogos in itself, which has ever been problematic. 
French philosopher Catherine Malabou, a part-time collaborator with Derrida himself, has taken a similarly constructive critical approach to the idea of phallogocentrism. Going into dialogue with psychoanalytic masters such as Sigmund Freud, Jacques Lacan and, most recently, Alain Badiou – to whose philosophy of the event Malabou responds with a radical traumatology firmly rooted in the neurosciences – her take is simply that psychoanalysis is inadequate to respond to the challenges she raises because of its phallogocentrist fixation, a dilemma she believes the neurosciences are better equipped to solve. The name of her solution to this problem is plasticity. See also Phallic monism References External links Deconstruction Feminist terminology Feminist theory Theories of language Critical theory Postmodern feminism Neologisms Jacques Derrida
Conceptual schema
A conceptual schema or conceptual data model is a high-level description of informational needs underlying the design of a database. It typically includes only the core concepts and the main relationships among them. This is a high-level model with insufficient detail to build a complete, functional database. It describes the structure of the whole database for a group of users. The conceptual model is also known as the data model that can be used to describe the conceptual schema when a database system is implemented. It hides the internal details of physical storage and targets the description of entities, datatypes, relationships and constraints. Overview A conceptual schema is a map of concepts and their relationships used for databases. This describes the semantics of an organization and represents a series of assertions about its nature. Specifically, it describes the things of significance to an organization (entity classes), about which it is inclined to collect information, and their characteristics (attributes) and the associations between pairs of those things of significance (relationships). Because a conceptual schema represents the semantics of an organization, and not a database design, it may exist on various levels of abstraction. The original ANSI four-schema architecture began with the set of external schemata that each represents one person's view of the world around him or her. These are consolidated into a single conceptual schema that is the superset of all of those external views. A data model can be as concrete as each person's perspective, but this tends to make it inflexible. If that person's world changes, the model must change. Conceptual data models take a more abstract perspective, identifying the fundamental things, of which the things an individual deals with are just examples. The model does allow for what is called inheritance in object oriented terms. The set of instances of an entity class may be subdivided into entity classes in their own right. Thus, each instance of a sub-type entity class is also an instance of the entity class's super-type. Each instance of the super-type entity class, then is also an instance of one of the sub-type entity classes. Super-type/sub-type relationships may be exclusive or not. A methodology may require that each instance of a super-type may only be an instance of one sub-type. Similarly, a super-type/sub-type relationship may be exhaustive or not. It is exhaustive if the methodology requires that each instance of a super-type must be an instance of a sub-type. A sub-type named "Other" is often necessary. Example relationships Each PERSON may be the vendor in one or more ORDERS. Each ORDER must be from one and only one PERSON. PERSON is a sub-type of PARTY. (Meaning that every instance of PERSON is also an instance of PARTY.) Each EMPLOYEE may have a supervisor who is also an EMPLOYEE. Data structure diagram A data structure diagram (DSD) is a data model or diagram used to describe conceptual data models by providing graphical notations which document entities and their relationships, and the constraints that bind them. See also References Further reading Perez, Sandra K., & Anthony K. Sarris, eds. (1995) Technical Report for IRDS Conceptual Schema, Part 1: Conceptual Schema for IRDS, Part 2: Modeling Language Analysis, X3/TR-14:1995, American National Standards Institute, New York, NY. Halpin T, Morgan T (2008) Information Modeling and Relational Databases, 2nd edn., San Francisco, CA: Morgan Kaufmann. 
External links A different point of view, as described by the agile community Data modeling Conceptual modelling
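To make the example relationships described above concrete (PARTY/PERSON as super-type and sub-type, each ORDER having exactly one PERSON as vendor, and EMPLOYEE with an optional supervisor), here is a minimal, hypothetical sketch in Python; the class and attribute names are illustrative only, and a real conceptual schema would normally remain independent of any implementation language:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Party:                      # super-type entity class
    name: str

@dataclass
class Person(Party):              # sub-type: every Person is also a Party
    orders: List["Order"] = field(default_factory=list)   # a Person may be the vendor in many Orders

@dataclass
class Order:
    order_id: int
    vendor: Person                # each Order must be from one and only one Person

@dataclass
class Employee(Person):
    supervisor: Optional["Employee"] = None   # an Employee may have a supervisor who is also an Employee

# Tiny illustration of the constraints expressed above.
alice = Employee(name="Alice")
bob = Employee(name="Bob", supervisor=alice)
order = Order(order_id=1, vendor=bob)
bob.orders.append(order)
print(order.vendor.name, "supervised by", bob.supervisor.name)
```

The sub-typing here is exclusive and non-exhaustive in the sense discussed above: an instance of Employee is also a Person and a Party, while a Party need not be a Person at all.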
Generalizability theory
Generalizability theory, or G theory, is a statistical framework for conceptualizing, investigating, and designing reliable observations. It is used to determine the reliability (i.e., reproducibility) of measurements under specific conditions. It is particularly useful for assessing the reliability of performance assessments. It was originally introduced by Lee Cronbach, N. Rajaratnam, and Goldine Gleser in 1963. Overview In G theory, sources of variation are referred to as facets. Facets are similar to the "factors" used in analysis of variance, and may include persons, raters, items/forms, time, and settings among other possibilities. These facets are potential sources of error and the purpose of generalizability theory is to quantify the amount of error caused by each facet and interaction of facets. The usefulness of data gained from a G study is crucially dependent on the design of the study. Therefore, the researcher must carefully consider the ways in which he/she hopes to generalize any specific results. Is it important to generalize from one setting to a larger number of settings? From one rater to a larger number of raters? From one set of items to a larger set of items? The answers to these questions will vary from one researcher to the next, and will drive the design of a G study in different ways. In addition to deciding which facets the researcher generally wishes to examine, it is necessary to determine which facet will serve as the object of measurement (e.g. the systematic source of variance) for the purpose of analysis. The remaining facets of interest are then considered to be sources of measurement error. In most cases, the object of measurement will be the person to whom a number/score is assigned. In other cases it may be a group of performers such as a team or classroom. Ideally, nearly all of the measured variance will be attributed to the object of measurement (e.g. individual differences), with only a negligible amount of variance attributed to the remaining facets (e.g., rater, time, setting). The results from a G study can also be used to inform a decision, or D, study. In a D study, we can ask the hypothetical question of "what would happen if different aspects of this study were altered?" For example, a soft drink company might be interested in assessing the quality of a new product through use of a consumer rating scale. By employing a D study, it would be possible to estimate how the consistency of quality ratings would change if consumers were asked 10 questions instead of 2, or if 1,000 consumers rated the soft drink instead of 100. By employing simulated D studies, it is therefore possible to examine how the generalizability coefficients (similar to reliability coefficients in Classical test theory) would change under different circumstances, and consequently determine the ideal conditions under which our measurements would be the most reliable. Comparison with classical test theory The focus of classical test theory (CTT) is on determining error of the measurement. Perhaps the most famous model of CTT is the equation X = T + e, where X is the observed score, T is the true score, and e is the error involved in measurement. Although e could represent many different types of error, such as rater or instrument error, CTT only allows us to estimate one type of error at a time. Essentially it throws all sources of error into one error term. This may be suitable in the context of highly controlled laboratory conditions, but variance is a part of everyday life. 
In field research, for example, it is unrealistic to expect that the conditions of measurement will remain constant. Generalizability theory acknowledges and allows for variability in assessment conditions that may affect measurements. The advantage of G theory lies in the fact that researchers can estimate what proportion of the total variance in the results is due to the individual factors that often vary in assessment, such as setting, time, items, and raters. Another important difference between CTT and G theory is that the latter approach takes into account how the consistency of outcomes may change if a measure is used to make absolute versus relative decisions. An example of an absolute, or criterion-referenced, decision would be when an individual's test score is compared to a cut-off score to determine eligibility or diagnosis (i.e. a child's score on an achievement test is used to determine eligibility for a gifted program). In contrast, an example of a relative, or norm-referenced, decision would be when the individual's test score is used to either (a) determine relative standing as compared to his/her peers (i.e. a child's score on a reading subtest is used to determine which reading group he/she is placed in), or (b) make intra-individual comparisons (i.e. comparing previous versus current performance within the same individual). The type of decision that the researcher is interested in will determine which formula should be used to calculate the generalizability coefficient (similar to a reliability coefficient in CTT). See also Item Response Theory References Brennan, R. L. (2001). Generalizability Theory. New York: Springer-Verlag. Chiu, C.W.C. (2001). Scoring performance assessments based on judgements: generalizability theory. New York: Kluwer. Crocker, L., & Algina, J. (1986). Introduction to Classical and Modern Test Theory. New York: Harcourt Brace. Cronbach, L.J., Gleser, G.C., Nanda, H., & Rajaratnam, N. (1972). The dependability of behavioral measurements: Theory of generalizability for scores and profiles. New York: John Wiley. Cronbach, L.J., Nageswari, R., & Gleser, G.C. (1963). Theory of generalizability: A liberalization of reliability theory. The British Journal of Statistical Psychology, 16, 137-163. Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428. doi: 10.1037/0033-2909.86.2.420 Shavelson, R.J., & Webb, N.M. (1991). Generalizability Theory: A Primer. Thousand Oaks, CA: Sage. External links Georg E. Matt, Generalizability Theory Rasch-based Generalizability Theory Ralph Bloch, G_String Software Statistical theory
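As a worked illustration of the G study / D study distinction described above, the following sketch (Python, with made-up data for a one-facet persons × raters crossed design; the variance-component estimates are the standard ANOVA-based formulas, not tied to any particular software package) estimates the variance components and then asks how the relative and absolute generalizability coefficients would change with different numbers of raters:

```python
import numpy as np

# Hypothetical data: rows = persons (objects of measurement), columns = raters (a single facet).
scores = np.array([
    [7, 8, 6],
    [5, 5, 4],
    [9, 9, 8],
    [4, 6, 5],
    [8, 7, 7],
], dtype=float)

n_p, n_r = scores.shape
grand = scores.mean()
person_means = scores.mean(axis=1)
rater_means = scores.mean(axis=0)

# Mean squares from a two-way ANOVA without replication (person x rater).
ss_p = n_r * ((person_means - grand) ** 2).sum()
ss_r = n_p * ((rater_means - grand) ** 2).sum()
ss_res = ((scores - grand) ** 2).sum() - ss_p - ss_r   # person x rater interaction plus residual error

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_res = ss_res / ((n_p - 1) * (n_r - 1))

# Estimated variance components (the G study); negative estimates are truncated to zero.
var_p = max((ms_p - ms_res) / n_r, 0.0)    # persons: systematic, "true" variance
var_r = max((ms_r - ms_res) / n_p, 0.0)    # raters: a facet of error
var_res = ms_res                           # interaction/residual error

# D study: how dependable would the mean score be with n_prime raters?
def g_coefficients(n_prime):
    rel_error = var_res / n_prime                 # error term for relative (norm-referenced) decisions
    abs_error = (var_r + var_res) / n_prime       # error term for absolute (criterion-referenced) decisions
    g_rel = var_p / (var_p + rel_error)
    g_abs = var_p / (var_p + abs_error)           # often called the phi (dependability) coefficient
    return g_rel, g_abs

for n_prime in (1, 3, 10):
    g_rel, g_abs = g_coefficients(n_prime)
    print(f"{n_prime} raters: relative G = {g_rel:.2f}, absolute (phi) = {g_abs:.2f}")
```

With more raters both error terms shrink and both coefficients rise, which is exactly the kind of question a D study is designed to answer; the gap between the relative and absolute coefficients reflects the rater variance that only matters for criterion-referenced decisions.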
Social shaping of technology
According to Robin A. Williams and David Edge (1996), "Central to social shaping of technology (SST) is the concept that there are choices (though not necessarily conscious choices) inherent in both the design of individual artifacts and systems, and in the direction or trajectory of innovation programs." If technology does not emerge from the unfolding of a predetermined logic or a single determinant, then innovation is a 'garden of forking paths'. Different routes are available, potentially leading to different technological outcomes. Significantly, these choices could have differing implications for society and for particular social groups. SST is one of the models of the technology: society relationship which emerged in the 1980s with MacKenzie and Wajcman's influential 1985 collection, alongside Pinch and Bijker's social construction of technology framework and Callon and Latour's actor-network theory. These have a common feature of criticism of the linear model of innovation and technological determinism. It differs from these notably in the attention it pays to the influence of the social and technological context of development which shapes innovation choices. SST is concerned to explore the material consequences of different technical choices, but criticizes technological determinism, which argues that technology follows its own developmental path, outside of human influences, and in turn, influences society. In this way, social shaping theorists conceive the relationship between technology and society as one of 'mutual shaping'. Some versions of this theory state that technology affects society by affordances, constraints, preconditions, and unintended consequences (Baym, 2015). Affordance is the idea that technology makes specific tasks easier in our lives, while constraints make tasks harder to complete. The preconditions of technology are the skills and resources that are vital to using technology to its fullest potential. Finally, the unintended consequences of technology are unanticipated effects and impact of technology. The cell phone is an example of the social shaping of technology (Zulto 2009). The cell phone has evolved over the years to make our lives easier by providing people with handheld computers that can answer calls, answer emails, search for information, and complete numerous other tasks (Zulto, 2009). Yet it has constraints for those that are not technologically savvy, hindering many people in society who do not understand how to utilize these devices. There are preconditions, such as monthly bills and access to electricity. There are also many unintended consequences such as the unintended distraction they cause for many people. Not only does technology affect society, but according to SST, society affects technology by way of economics, politics, and culture (Baym, 2015). For instance, cell phones have spread in poor countries due to cell phones being more affordable than a computer and internet service (economics), government regulations which have made it fairly easy for cell phone providers to build networks (politics), and the small size of cell phones which fit easily into many cultures’ need for mobile communication (culture). Names associated with this field Donald A. MacKenzie, Judy Wajcman, Bruno Latour, Wiebe Bijker, Thomas P. Hughes, John Law, Trevor Pinch (also Trevor J. Pinch), Michel Callon, Steve Woolgar, Carl May, Thomas J. Misa, Boelie Elzen, Robin Williams (academic), Ronald R. 
Kline, Marlei Pozzebon, and Osman Sadeck See also Normalization process theory (NPT) Science and technology studies Science studies Social construction of technology (SCOT) Technological revolution Technology and society References Donald MacKenzie and Judy Wajcman, editors. The Social Shaping of Technology. 2nd ed. Open University Press, 1999. Robin Williams and David Edge, "The social shaping of technology", Research Policy, Vol. 25, 1996, pp. 865–899. Baym, N. K. (2015). Personal connections in the digital age. John Wiley & Sons, pp. 51–52. Zulto, J. (2009, April 6). Social Shaping of Technology. Retrieved May 20, 2018. External links Technological Determinism and Social Choice - deals with both technological determinism and the social shaping of technology social shaping Science and technology studies Social constructionism Technological change
POSDCORB
POSDCORB is an acronym widely used in the field of management and public administration that reflects the classic view of organizational theory. It appeared most prominently in a 1937 paper by Luther Gulick (in a set edited by himself and Lyndall Urwick). However, he first presented the concept in 1935. Initially, POSDCORB was envisioned in an effort to develop public service professionals. In Gulick's own words, the elements are as follows: Planning, Organizing, Staffing, Directing, Co-Ordinating, Reporting and Budgeting. Coining of the acronym In his piece "Notes on the Theory of Organization", a memo prepared while he was a member of the Brownlow Committee, Luther Gulick asks rhetorically "What is the work of the chief executive? What does he do?" POSDCORB is the answer, "designed to call attention to the various functional elements of the work of a chief executive because 'administration' and 'management' have lost all specific content." According to Gulick, the elements are: Planning Organizing Staffing Directing Co-ordinating Reporting Budgeting Elaborations Gulick's "Notes on the Theory of Organization" further defines the patterns of POSDCORB. That document explains how portions of an executive's workload may be delegated, and that some of the elements can be organized as subdivisions of the executive depending on the size and complexity of the enterprise. Under Organizing, Gulick emphasized the division and specialization of labor in a manner that would increase efficiency. Yet Gulick observed that there were limitations. Based on his practical experience, he carefully articulated the many factors. Gulick described how the organization of workers could be done in four ways. According to him, these are related and may be multi-level. Specifically, they are: By the purpose the workers are serving, such as furnishing water, providing education, or controlling crime. Gulick lists these in his organizational tables as vertical organizations. By the process the workers are using, such as engineering, doctoring, lawyering, or statistics. Gulick lists these in his organizational tables as horizontal organizations. By the clientele or material: the persons or things being dealt with, such as immigrants, veterans, forests, mines, or parks in government; or such as a department store's furniture department, clothing department, hardware department, or shoe department in the private sector. By the place where the workers do their work. Gulick stresses how these modes of organization often cross, forming interrelated structures. Organizations like schools may include workers and professionals not in the field of education such as nurses. How they are combined or carefully aggregated into a school — or a school system — is of concern. But the early work of Gulick was not limited to small organizations. He started off his professional career at New York City's Bureau of Municipal Research and advanced to President Franklin D. Roosevelt's Committee on Administrative Management. Under Coordination, Gulick notes that two methods can be used to achieve coordination of divided labor. The first is by organization, or placing workers under managers who coordinate their efforts. The second is by dominance of an idea, where a clear idea of what needs to be done is developed in each worker, and each worker fits their work to the needs of the whole. Gulick notes that these two ideas are not mutually exclusive, and that most enterprises function best when both are utilized. 
Gulick notes that any manager will have a finite amount of time and energy, and discusses span of control under coordination. Drawing from the work of Henri Fayol, Gulick notes that the number of subordinates that can be handled under any single manager will depend on factors such as organizational stability and the specialization of the subordinates. Gulick stops short of giving a definite number of subordinates that any one manager can control, but authors such as Sir Ian Hamilton and Lyndall Urwick have settled on numbers between three and six. Span of control was later expanded upon and defended in depth by Lyndall Urwick in his 1956 piece The Manager's Span of Control. Under coordination, as well as organization, Gulick emphasizes the theory of unity of command: that each worker should only have one direct superior so as to avoid confusion and inefficiency. Gulick discusses the concept of a holding company which may perform limited coordinating, planning, or budgeting functions. Subsidiary entities may carry out their work with autonomy, but as the holding company allows, based upon their authority and direction. Influence from French administration theory Luther Gulick, one of the Brownlow Committee authors, states that his statement of work of a chief executive is adapted from the functional analysis elaborated by Henri Fayol in his "Industrial and General Administration". Indeed, Fayol's work includes fourteen principles and five elements of management that lay the foundations of Gulick's POSDCORB. Fayol's fourteen principles of management are as follows: Division of Work: The division of work principle declares that staffs function better when assigned tasks according to their specialties. Authority and Responsibility: This principle proposes the requirement for managers or manager like authority in order to effectively direct subordinates to perform their jobs while still being held accountable for their conduct. Discipline: The discipline principle supports strict and clearly defined rules and regulations in the workplace to ensure professional employee behavior and order. Unity of Command: The unity of command doctrine proclaims that employees should only receive command and report to one administrator or boss-like authority figure. Unity of Direction: The unity of direction principle states that there should only be one plan, one objective and one director head for each specific plan. Subordination of Individual Interest to General Interest: The subordination of Individual interest to general interest principle declares that the interests and objectives of the organization overrides the interests of any employee, management staff, or any group. Remuneration of Personnel: The remuneration of personnel principle deems that both staff and management salary should be fairly earned, justifiable and no party should be deceived. Centralization: The centralization principle advocates that managerial decision making should be centralized with orders being delivered from top tier management to the middle management, where the orders are arranged and then clarified for the line staff to execute. Scalar Chain (line of authority with peer level communication): The scalar chain principle contends that communication within the organization should only be one uninterrupted vertical flow of communication and any other type of communication should only occur in times of emergencies and when approved by a manager. 
Order: The order principle can be interpreted in either of two ways; some believe this principle refers to giving every material in the organization its right position, while others believe it means delegating the right job to the right employee. Equity: The equity principle proclaims that managers should be fair and impartial to their staff, but the relationship should still be in compliance with the principle of subordination of individual interest to the general interest. Stability of Tenure of Personnel: The stability of tenure of personnel principle states that management should employ the right staff and properly train them in hopes of retaining their employment for a long time and benefiting the organization through experience and expertise. Initiative: The initiative principle refers to management's creativity and its ability to implement new ideas within the organization to ensure growth and success in the organization. Esprit de Corps: The esprit de corps principle holds that organizations should promote high morale and unity to retain the best employees for lengthy periods of time. Henri Fayol's influence on Luther Gulick is also visibly apparent in the five elements of management discussed in Fayol's book, which are as follows: Planning – examining the future and drawing up plans of action Organizing – building up the structure (labor and material) of the undertaking Command – maintaining activity among the personnel Co-ordination – unifying and harmonizing activities and efforts Control – seeing that everything occurs in conformity with policies and practices Role in management and public administration history POSDCORB, with its humble beginnings in the Brownlow Committee literature, is still heavily referenced in today's public administration and politics. Many public administrators even believe the Brownlow documents initiated "the Reorganization Act of 1939, a train of measures that the act set in motion can reasonably be attributed to it". POSDCORB management theories were also responsible for the administrative reorganization that occurred around 1937, which utilized Gulick's organizing and coordinating steps in the POSDCORB administrative process, providing for more concise departments, and even room for new agencies within the government, making for a more efficient government. POSDCORB generally fits into the classical management movement, being classified as an element of social scientific management, which was popular in the late 19th and early 20th century. Gulick's POSDCORB patterns were instrumental in highlighting the theory of span of control, or limits on the number of people one manager could supervise, as well as unity of command, to the fields of management and public administration. According to notable Public Administration scholars such as Nicholas Henry, POSDCORB, the principles it represents, and subsequent expansions upon the POSDCORB concept form the height of Public Administration in an era when it was seen as just another aspect of the field of management as a whole. Gulick's work has been heavily cited and expanded upon by scholars and practitioners in the fields of management and public administration since the publication of Papers on the Science of Administration in 1937.
In his 1987 piece "Deja Vu: French Antecedents of American Public Administration", French public administrator Daniel Martin notes that virtually all of the principles in American Public Administration up to 1937 and the coining of the POSDCORB acronym, including the POSDCORB principles, were present in the French literature on the subject by 1859, but that this literature had largely been forgotten by the theorists of that era, thus the "re-invention" of these principles in the later French and American literature. Essentially, "The highest goals of the American Administrative State are the same today as they were in 1937 and in 1787: Public administration is first and foremost concerned with upholding the democratic values embedded within our constitutional heritage." Criticisms As early as 1938, literature began appearing in the field of public administration challenging the validity of POSDCORB and the concept that there could even be a rigid set of principles in administration. In 1946 and 1947, prominent Public Administration scholars such as Robert Dahl, Dwight Waldo, and Herbert A. Simon released articles and books criticizing POSDCORB and the notion of administrative principles. Simon's article Proverbs of Administration challenges the POSDCORB principles by stating "For almost every principle one can find an equally plausible and acceptable contradictory principle." Among other criticisms, Simon states that the POSDCORB principles are an oversimplification of administration. Simon's criticisms largely center on span of control and unity of command; he states that sometimes it is necessary for a subordinate to receive guidance or directives from more than one source, and he also takes issue with Gulick's division of labor concepts. Simon's other criticisms included the lack of evidence supporting POSDCORB. Yet others argue that organizations are full of variety and are challenging to control. Strength of POSDCORB POSDCORB generally fits into the Classical Management movement, being classified as an element of scientific management. Gulick's POSDCORB principles were instrumental in highlighting the theory of span of control, or limits on the number of people one manager could supervise, as well as unity of command, to the fields of management and public administration. POSDCORB's strength also draws on Fayol's fourteen principles of management. Support In his 2016 piece "Instantiations of POSDCORB", practitioner Paul Chalekian suggested empirical evidence for POSDCORB, involving the adoption of institutions and support for its elements. Notes References Fayol, H. (1949). General and Industrial Management. (C. Storrs, Trans.). London: Sir Isaac Pitman & Sons, LTD. (Original work published 1918) Fitch, L. (1996). Making Democracy Work. Berkeley: Institute of Governmental Studies Press. Gulick, L. H. (1937). Notes on the Theory of Organization. In L. Gulick & L. Urwick (Eds.), Papers on the Science of Administration (pp. 3–45). New York: Institute of Public Administration. Pindur, W.; Rogers, S. E.; and Kim, P. S. (1995). The history of management: a global perspective. Journal of Management History, 1 (1), pp. 59–77. Shafritz, Jay and Ott, J. Steven. 2001. Classical Organization Theory. In J. Shafritz & J. Ott (Eds.), Classics of Organization Theory (pp. 27–34). Orlando: Harcourt. Urwick, L. (1933). Organization as a Technical Problem. In L. Gulick & L. Urwick (Eds.), Papers on the Science of Administration (pp. 49–88). New York: Institute of Public Administration. Urwick, L. (1956). The Manager's Span of Control.
The Harvard Business Review. May–June, 1956, pp. 39–47. Acronyms Organizational theory Public administration
Open learning
Open learning is an innovative movement in education that emerged in the 1970s and evolved into fields of practice and study. The term refers generally to activities that either enhance learning opportunities within formal education systems or broaden learning opportunities beyond formal education systems. Open learning involves but is not limited to: classroom teaching methods, approaches to interactive learning, formats in work-related education and training, the cultures and ecologies of learning communities, and the development and use of open educational resources. While there is no agreed-upon, comprehensive definition of open learning, central focus is commonly placed on the "needs of the learner as perceived by the learner." Case studies illustrate open learning as an innovation both within and across academic disciplines, professions, social sectors and national boundaries, and in business and industry, higher education institutions, collaborative initiatives between institutions, and schooling for young learners. Inception Open learning as a teaching method is founded on the work of Célestin Freinet in France and Maria Montessori in Italy, among others. Open learning is supposed to allow pupils self-determined, independent and interest-guided learning. A prominent example is the language experience approach to teaching initial literacy (cf. Brügelmann/ Brinkmann 2011). More recent work on open learning has been conducted by the German pedagogues Hans Brügelmann (1975; 1999), Falko Peschel (2002), Jörg Ramseger (1977) and Wulf Wallrabenstein (1991). The approach is supposed to face up to three challenges (cf. in more detail Brügelmann/ Brinkmann 2008, chap. 1): the vast differences in experiences, interests, and competencies between children of the same age; the constructivist nature of learning demanding active problem-solving by the learner him- and herself; the legal requirement of student participation in decisions stipulated by the UN Convention on the Rights of the Child (CRC). of 1989. Current uses of the term The term "open learning" also refers to open and free sharing of educational materials. See also Active learning Alternative education Augmented learning Cooperative learning Didactic method Distance education Experiential education Example choice Language Experience Approach Learning by teaching (LdL) Language exchange Lifelong learning MIT OpenCourseWare MIT Open Learning Open education OpenLearning a social online learning platform for teachers to deliver courses. Open Learning for Development an Open Training Platform sponsored by UNESCO offering free training resources on a wide range of development topics, fostering cooperation to provide free and open content for development. Minimally invasive education, a term used in the deployment of Internet-connected computers in public places to encourage voluntary learning. Self-regulated learning Social learning (social pedagogy) References Notes Further reading Brügelmann, H. (1975): Open curricula—A paradox? In: Cambridge Journal of Education, Vol. 1, No. 5, Lent Term 1975, 12-20. Brügelmann, H. (1999): From invention to convention. Children's different routes to literacy. How to teach reading and writing by construction vs. instruction. In: Nunes, T. (ed.) (1999): Learning to read: An integrated view from research and practice. Kluwer: Dordrecht et al., pp. 315–342. Brügelmann, H./ Brinkmann, E.(2008): Öffnung des Anfangsunterrichts. Theoretische Prinzipien, unterrichtspraktische Ideen und empirische Befunde. 
Arbeitsgruppe Primarstufe/ Universität: Siegen (2nd ed.. 2009). Brügelmann, H./ Brinkmann, E. (2011): Combining openness and structure in the initial literacy curriculum. A language experience approach for beginning teachers. https://web.archive.org/web/20160303224849/http://www2.agprim.uni-siegen.de/printbrue/brue.bri.language_experience.engl.111124.pdf Giaconia, R.M./ Hedges, L.V. (1982): Identifying features of effective open education. In: Review of Educational Research, Vol. 52, 579-602. Kent, Jeff (1987): Principles of Open Learning, Witan Books, . Peschel, F. (2002a+b): Offener Unterricht – Idee – Realität - Perspektive und ein praxiserprobtes Konzept zur Diskussion. Teil I: Allgemeindidaktische Überlegungen. Teil II: Fachdidaktische Überlegungen. Schneider Verlag Hohengehren: Baltmannsweiler. Peschel, F. (2003): Offener Unterricht - Idee, Realität, Perspektive und ein praxiserprobtes Konzept in der Evaluation. Dissertation. FB 2 der Universität: Siegen/ Schneider Hohengehren: Baltmannsweiler. Ramseger, J. (1977): Offener Unterricht in der Erprobung. Erfahrungen mit einem didaktischen Modell. Juventa: München (3rd ed. 1992). Rothenberg, J. (1989): The open classroom reconsidered. In: The Elementary School Journal, Vol. 90, No. 1, 69-86. Silberman, C.E. (Ed.) (1973): The open classroom Reader. Vintage Books: New York. Wallrabenstein, W. (1991): Offene Schule – offener Unterricht. Ratgeber für Eltern und Lehrer. Rororo-Sachbuch 8752: Reinbek. Educational practices Teaching Philosophy of education Pedagogy
Foresight (futures studies)
In futurology, especially in Europe, the term foresight has become widely used to describe activities such as: critical thinking concerning long-term developments; debate and effort to create wider participatory democracy (a particular concern of futurists who are normative and focus on action driven by their values, although foresight itself is a set of competencies and not a value system); and shaping the future, especially by influencing public policy. In the last decade, scenario methods, for example, have become widely used in some European countries in policy-making. The FORSOCIETY network brings together national Foresight teams from most European countries, and the European Foresight Monitoring Project is collating material on Foresight activities around the world. In addition, foresight methods are being used more and more in regional planning and decision-making (“regional foresight”). Several non-European think-tanks like Strategic Foresight Group are also engaged in foresight studies. The foresight of futurology is also known as strategic foresight. This foresight, as used by and describing professional futurists trained in master's programs, is the research-driven practice of exploring expected and alternative futures and guiding futures to inform strategy. Foresight includes understanding the relevant recent past; scanning to collect insight about the present; futuring to describe the understood future, including trend research; environment research to explore possible trend breaks from developments on the fringe and other divergencies that may lead to alternative futures; visioning to define preferred future states; designing strategies to craft this future; and adapting the present forces to implement this plan. There is notable but not complete overlap between foresight and strategic planning, change management, forecasting, and design thinking. At the same time, the use of foresight by companies (“corporate foresight”) is becoming more professional and widespread. Corporate foresight is used to support strategic management, identify new business fields and increase the innovation capacity of a firm. Foresight is not the same as futures research or strategic planning. It encompasses a range of approaches that combine the three components mentioned above, which may be recast as: futures (forecasting, forward thinking, prospectives), planning (strategic analysis, priority setting), and networking (participatory, dialogic) tools and orientations. Much futurology research has been rather ivory-tower work, but Foresight programmes were designed to influence policy - often R&D policy. Much technology policy had been very elitist; Foresight attempts to go beyond the "usual suspects" and gather widely distributed intelligence. These three lines of work were already common in Francophone futures studies going by the name la prospective, but in the 1990s there was an explosion of systematic organisation of these methods in large-scale technology foresight programmes in Europe and more widely. Foresight thus draws on traditions of work in long-range planning and strategic planning, horizontal policymaking and democratic planning, and participatory futurology - but was also highly influenced by systemic approaches to innovation studies, science and technology policy, and analysis of "critical technologies". Many of the methods that are commonly associated with Foresight - Delphi surveys, scenario workshops, etc. - derive from futurology.
The flowchart to the right provides an overview of some of the techniques as they relate to the scenario as defined in the intuitive logics tradition. So does the fact that Foresight is concerned with: The longer-term - futures that are usually at least 10 years away (though there are some exceptions to this, especially in its use in private business). Since Foresight is action-oriented (the planning link) it will rarely be oriented to perspectives beyond a few decades out (though where decisions like aircraft design, power station construction or other major infrastructural decisions are concerned, then the planning horizon may well be half a century). Alternative futures: it is helpful to examine alternative paths of development, not just what is currently believed to be most likely or business as usual. Often Foresight will construct multiple scenarios. These may be an interim step on the way to creating what may be known as positive visions, success scenarios, aspirational futures. Sometimes alternative scenarios will be a major part of the output of Foresight work, with the decision about what future to build being left to other mechanisms. See also Accelerating change Emerging technologies Foresight Institute Forecasting Horizon scanning Optimism bias Reference class forecasting Scenario planning Strategic foresight Strategic Foresight Group Technology forecasting Technology Scouting References Further reading There are numerous journals that deal with research on foresight: Technological Forecasting and Social Change Futures Futures & Foresight Science European Journal of Futures Research Foresight Research focusing more on the combination of foresight and national R&D policy can be found in International Journal of Foresight and Innovation Policy External links The FORLEARN Online Guide developed by the Institute for Prospective Technological Studies of the European Commission The Foresight Programme of UNIDO, the Investment and Technology Promotion Branch of the United Nations Industrial Development Organization. Handbook of Knowledge Society Foresight published by the European Foundation, Dublin Foresight (futures studies) Transhumanism
Physics education
Physics education or physics teaching refers to the education methods currently used to teach physics. The occupation is called physics educator or physics teacher. Physics education research refers to an area of pedagogical research that seeks to improve those methods. Historically, physics has been taught at the high school and college level primarily by the lecture method together with laboratory exercises aimed at verifying concepts taught in the lectures. These concepts are better understood when lectures are accompanied with demonstration, hand-on experiments, and questions that require students to ponder what will happen in an experiment and why. Students who participate in active learning for example with hands-on experiments learn through self-discovery. By trial and error they learn to change their preconceptions about phenomena in physics and discover the underlying concepts. Physics education is part of the broader area of science education. History In Ancient Greece, Aristotle wrote what is considered now as the first textbook of physics. Aristotle's ideas were taught unchanged until the Late Middle Ages, when scientists started making discoveries that didn't fit them. For example, Copernicus' discovery contradicted Aristotle's idea of an Earth-centric universe. Aristotle's ideas about motion weren't displaced until the end of the 17th century, when Newton published his ideas. Today's physics students often think of physics concepts in Aristotelian terms, despite being taught only Newtonian concepts. Teaching strategies Teaching strategies are the various techniques used to facilitate the education of students with different learning styles. The different teaching strategies are intended to help students develop critical thinking and engage with the material. The choice of teaching strategy depends on the concept being taught, and indeed on the interest of the students. Methods/Approaches for teaching physics Lecture: Lecturing is one of the more traditional ways of teaching science. Owing to the convenience of this method, and the fact that most teachers are taught by it, it remains popular in spite of certain limitations (compared to other methods, it does little to develop critical thinking and scientific attitude among students). This method is teacher centric. Recitation: Also known as the Socratic method. In this method, the student plays a greater role than they would in a lecture. The teacher asks questions with the aim of prompting the thoughts of the students. This method can be very effective in developing higher order thinking in pupils. To apply this strategy, the students should be partially informed about the content. The efficacy of the recitation method depends largely on the quality of the questions. This method is student centric. Demonstration: In this method, the teacher performs certain experiments, which students observe and ask questions about. After the demonstration, the teacher can explain the experiment further and test the students' understanding via questions. This method is an important one, as science is not an entirely theoretical subject. Lecture-cum-Demonstration: As its name suggests, this is a combination of two of the above methods: lecture and demonstration. The teacher performs the experiment and explains it simultaneously. By this method, the teacher can provide more information in less time. As with the demonstration method, the students only observe; they do not get any practical experience of their own. 
It is not possible to teach all topics by this method. Laboratory Activities: Laboratories have students conduct physics experiments and collect data by interacting with physics equipment. Generally, students follow instructions in a lab manual. These instructions often take students through an experiment step-by-step. Typical learning objectives include reinforcing the course content through real-world interaction (similar to demonstrations) and thinking like experimental physicists. Lately, there has been some effort to shift lab activities toward the latter objective by separating them from the course content, having students make their own decisions, and calling into question the notion of a "correct" experimental result. Unlike the demonstration method, the laboratory method gives students practical experience performing experiments like professional scientists. However, it often requires a significant amount of time and resources to work properly. Problem-based learning: A group of 8-10 students and a tutor meet together to study a "case" or trigger problem. One student acts as a chair and one as a scribe to record the session. Students interact to understand the terminology and issues of the problem, discussing possible solutions and a set of learning objectives. The group breaks up for private study, then returns to share results. The approach has been used in many UK medical schools. The technique fosters independence, engagement, development of communication skills, and integration of new knowledge with real-world issues. However, the technique requires more staff per student, staff willing to facilitate rather than lecture, and well-designed and documented trigger scenarios. The technique has been shown to be effective in teaching physics. Research Physics education research is the study of how physics is taught and how students learn physics. It is a subfield of educational research. Worldwide Physics education in Hong Kong Physics education in the United Kingdom See also American Association of Physics Teachers Balsa wood bridge Concept inventory Egg drop competition Feynman lectures Harvard Project Physics Learning Assistant Model List of physics concepts in primary and secondary education curricula Mousetrap car Physical Science Study Committee Physics First SAT Subject Test in Physics Physics Outreach Science education Teaching quantum mechanics Mathematics education Engineering education Discipline-based education research References Further reading Education by subject Occupations
Propædia
The one-volume Propædia is the first of three parts of the 15th edition of Encyclopædia Britannica, intended as a compendium and topical organization of the 12-volume Micropædia and the 17-volume Macropædia, which are organized alphabetically. Introduced in 1974 with the 15th edition, the Propædia and Micropædia were intended to replace the Index of the 14th edition; however, after widespread criticism, the Britannica restored the Index as a two-volume set in 1985. The core of the Propædia is its Outline of Knowledge, which seeks to provide a logical framework for all human knowledge. However, the Propædia also has several appendices listing the staff members, advisors and contributors to all three parts of the Britannica. The last edition of the print Britannica was published in 2010. Outline of Knowledge Like the Britannica as a whole, the Outline has three types of goals: Epistemological: to provide a systematic, hierarchical categorization of all human knowledge, a 20th-century analog of the Great Chain of Being and Francis Bacon's outline in Instauratio magna. Educational: to lay out a course of study for each major discipline, a "roadmap" for learning a whole field. Organizational: to serve as an expanded Table of Contents for the Micropædia and Macropædia. According to Mortimer J. Adler, the designer of the Propaedia, all articles in the full Britannica were designed to fit into the Outline of Knowledge. The Outline has 167 sections, which are categorized into 41 divisions and then into 10 parts. Each part has an introductory essay written by the same individual responsible for developing the outline for that part, which was done in consultation and collaboration with a handful of other scholars. In all, 86 men and one woman were involved in developing the Outline of Knowledge. The Outline was an eight-year project of Mortimer J. Adler, published 22 years after he published a similar effort (the Syntopicon) that attempts to provide an overview of the relationships among the "Great Ideas" in Adler's Great Books of the Western World series. (The Great Books were also published by the Encyclopædia Britannica Inc.) Adler stresses in his book, A Guidebook to Learning: For a Lifelong Pursuit of Wisdom, that the ten categories should not be taken as hierarchical but as circular. Contents 1. Matter and Energy The lead author was Nigel Calder, who wrote the introduction "The Universe of the Physicist, the Chemist, and the Astronomer". 1.1 Atoms 1.1.1 Structure and Properties of Atoms 1.1.2 Atomic Nuclei and Elementary Particles 1.2 Energy, Radiation, and States of Matter 1.2.1 Chemical Elements: Periodic Variation in Their Properties 1.2.2 Chemical Compounds: Molecular Structure and Chemical Bonding 1.2.3 Chemical Reactions 1.2.4 Heat, Thermodynamics, Liquids, Gases, Plasmas 1.2.5 The Solid State of Matter 1.2.6 Mechanics of Particles, Rigid and Deformable Bodies: Elasticity, Vibration, and Flow 1.2.7 Electricity and Magnetism 1.2.8 Waves and Wave Motion 1.3 The Universe 1.3.1 The Cosmos 1.3.2 Galaxies and Stars 1.3.3 The Solar System 2. The Earth The lead author was Peter John Wyllie, who wrote the introduction "The Great Globe Itself". 
2.1 Earth's Properties, Structure, Composition 2.1.1 The Planet Earth 2.1.2 Earth's Physical Properties 2.1.3 Structure and Composition of the Earth's Interior 2.1.4 Minerals and Rocks 2.2 Earth's Envelope 2.2.1 The Atmosphere 2.2.2 The Hydrosphere: the Oceans, Freshwater and Ice Masses 2.2.3 Weather and Climate 2.3 Surface Features 2.3.1 Physical Features of the Earth's Surface 2.3.2 Features Produced by Geomorphic Processes 2.4 Earth's History 2.4.1 Origin and Development of the Earth and Its Envelopes 2.4.2 The Interpretation of the Geologic Record 2.4.3 Eras and Periods of Geologic Time 3. Life The lead author was René Dubos, who wrote the introduction "The Mysteries of Life". 3.1 The Nature and Diversity of Life 3.1.1 Characteristics of Life 3.1.2 The Origin and Evolution of Life 3.1.3 Classification of Living Things 3.2 The Molecular Basis of Life 3.2.1 Chemicals and the Vital Processes 3.2.2 Metabolism: Bioenergetics and Biosynthesis 3.2.3 Vital Processes at the Molecular Level 3.3 The Structures and Functions of Organisms 3.3.1 Cellular Basis of Form and Function 3.3.2 Relation of Form and Function in Organisms 3.3.3 Coordination of Vital Processes: Regulation and Integration 3.3.4 Covering and Support: Integumentary, Skeletal, and Musculatory Systems 3.3.5 Nutrition: the Procurement and Processing of Nutrients 3.3.6 Gas Exchange, Internal Transport, and Elimination 3.3.7 Reproduction and Sex 3.3.8 Development: Growth, Differentiation, and Morphogenesis 3.3.9 Heredity: the Transmission of Traits 3.4 The Behavior of Organisms 3.4.1 Nature and Patterns of Behavior 3.4.2 Development and Range of Behavioral Capacities: Individual and Group Behavior 3.5 The Biosphere 3.5.1 Basic Features of the Biosphere 3.5.2 Populations and Communities 3.5.3 Disease and Death 3.5.4 Biogeographic Distribution of Organisms: Ecosystems 3.5.5 The Place of Humans in the Biosphere 4. Human Life The lead author was Loren Eiseley, who wrote the introduction "The Cosmic Orphan". 4.1 The Development of Human Life 4.1.1 Human Evolution 4.1.2 Human Heredity: the Races 4.2 The Human Body: Health and Disease 4.2.1 The Structures and Functions of the Human Body 4.2.2 Human Health 4.2.3 Human Diseases 4.2.4 The Practice of Medicine and Care of Health 4.3 Human Behavior and Experience 4.3.1 General theories of human nature and behavior 4.3.2 Antecedent conditions and developmental processes affecting a person's behavior and conscious experience 4.3.3 Influence of the current environment on a person's behavior and conscious experience: attention, sensation, and perception 4.3.4 Current Internal states affecting a person' behavior and conscious experience 4.3.5 Development of Learning and Thinking 4.3.6 Personality and the Self: Integration and Disintegration 5. Society The lead author was Harold D. Lasswell, who wrote the introduction "Man the Social Animal". 
5.1 Social Groups: Ethnic groups and Cultures 5.1.1 Peoples and Cultures of the World 5.1.2 The Development of Human Culture 5.1.3 Major Cultural Components and Institutions of Societies 5.1.4 Language and Communication 5.2 Social Organization and Social Change 5.2.1 Social Structure and Change 5.2.2 The Group Structure of Society 5.2.3 Social Status 5.2.4 Human Populations: Urban and Rural Communities 5.3 The Production, Distribution, and Utilization of Wealth 5.3.1 Economic Concepts, Issues, and Systems 5.3.2 Consumer and Market: Pricing and Mechanisms for Distributing Goods 5.3.3 The Organization of Production and Distribution 5.3.4 The Distribution of Income and Wealth 5.3.5 Macroeconomics 5.3.6 Economic Growth and Planning 5.4 Politics and Government 5.4.1 Political Theory 5.4.2 Political Institutions: the Structure, Branches, & Offices of Government 5.4.3 Functioning of Government: the Dynamics of the Political Process 5.4.4 International Relations: Peace and War 5.5 Law 5.5.1 Philosophies and Systems of Law; the Practice of Law 5.5.2 Branches of Public Law, Substantive and Procedural 5.5.3 Branches of Private Law, Substantive and Procedural 5.6 Education 5.6.1 Aims and Organization of Education 5.6.2 Education Around the World 6. Art The lead author was Mark Van Doren, who wrote the introduction "The World of Art". 6.1 Art in General 6.1.1 Theory and Classification of the Arts 6.1.2 Experience and Criticism of Art; the Nonaesthetic Context of Art 6.1.3 Characteristics of the Arts in Particular Cultures 6.2 Particular Arts 6.2.1 Literature 6.2.2 Theater 6.2.3 Motion Pictures 6.2.4 Music 6.2.5 Dance 6.2.6 Architecture, Garden and Landscape Design, and Urban Design 6.2.7 Sculpture 6.2.8 Drawing, Painting, Printmaking, Photography 6.2.9 Decoration and Design 7. Technology The lead author was Lord Peter Ritchie-Calder, who wrote the introduction "Knowing How and Knowing Why". 7.1 Nature & Development of Technology 7.1.1 Technology: Its Scope and History 7.1.2 The Organization of Human Work 7.2 Elements of Technology 7.2.1 Technology of Energy Conversion and Utilization 7.2.2 Technology of Tools and Machines 7.2.3 Technology of Measurement, Observation, and Control 7.2.4 Extraction and Conversion of Industrial Raw Materials 7.2.5 Technology of Industrial Production Processes 7.3 Fields of Technology 7.3.1 Agriculture and Food Production 7.3.2 Technology of the Major Industries 7.3.3 Construction Technology 7.3.4 Transportation Technology 7.3.5 Technology of Information Processing and of Communications Systems 7.3.6 Military Technology 7.3.7 Technology of the Urban Community 7.3.8 Technology of Earth and Space Exploration 8. Religion The lead author was Wilfred Cantwell Smith, who wrote the introduction "Religion as Symbolism". 8.1 Religion in General 8.1.1 Knowledge and Understanding of Religion 8.1.2 Religious Life: Institutions and Practices 8.2 Particular Religions 8.2.1 Prehistoric Religion and Primitive Religion 8.2.2 Religions of Ancient Peoples 8.2.3 Hinduism and Other Religions of India 8.2.4 Buddhism 8.2.5 Indigenous Religions of East Asia: Religions of China, Korea, and Japan 8.2.6 Judaism 8.2.7 Christianity 8.2.8 Islam 8.2.9 Other Religions and Religious Movements in the Modern World 9. History The lead author was Jacques Barzun, who wrote the introduction "The Point and Pleasure of Reading History". 
9.1 Ancient Southwest Asia, North Africa, and Europe 9.1.1 Ancient Southwest Asia and Egypt, the Aegean, and North Africa 9.1.2 Ancient Europe and Classical Civilizations of the Mediterranean to AD 395 9.2 Medieval Southwest Asia, North Africa, and Europe 9.2.1 The Byzantine Empire and Europe from AD 395–1050 9.2.2 The Formative Period in Islamic History, AD 622–1055 9.2.3 Western Christendom in the High and Later Middle Ages 1050–1500 9.2.4 The Crusades, the Islamic States, and Eastern Christendom 1050–1480 9.3 East, Central, South, and Southeast Asia 9.3.1 China to the Beginning of the Late T'ang AD 755 9.3.2 China from the Late T'ang to the Late Ch'ing AD 755–1839 9.3.3 Central and Northeast Asia to 1750 9.3.4 Japan to the Meiji Restoration 1868, Korea to 1910 9.3.5 The Indian Subcontinent and Ceylon to AD 1200 9.3.6 The Indian Subcontinent 1200–1761, Ceylon 1200–1505 9.3.7 Southeast Asia to 1600 9.4 Sub-Saharan Africa to 1885 9.4.1 West Africa to 1885 9.4.2 The Nilotic Sudan and Ethiopia AD 550–1885 9.4.3 East Africa and Madagascar to 1885 9.4.4 Central Africa to 1885 9.4.5 Southern Africa to 1885 9.5 Pre-Columbian America 9.5.1 Andean Civilization to AD 1540 9.5.2 Meso-American Civilization to AD 1540 9.6 The Modern World to 1920 9.6.1 Western Europe 1500–1789 9.6.2 Eastern Europe, Southwest Asia, and North Africa 1480–1800 9.6.3 Europe 1789–1920 9.6.4 European Colonies in the Americas 1492–1790 9.6.5 United States and Canada 1763–1920 9.6.6 Latin-America and Caribbean to 1920 9.6.7 Australia and Oceania to 1920 9.6.8 South Asia Under European Imperialism 1500–1920 9.6.9 Southeast Asia Under European Imperialism 1600–1920 9.6.10 China until Revolution 1839–1911, Japan from Meiji Restoration to 1910 9.6.11 Southwest Asia, North Africa 1800–1920, Sub-Saharan Africa 1885–1920: Under European Imperialism 9.7 The World Since 1920 9.7.1 International Movements, Diplomacy and War Since 1920 9.7.2 Europe Since 1920 9.7.3 The United States and Canada Since 1920 9.7.4 Latin American and Caribbean Nations Since 1920 9.7.5 China in Revolution, Japanese Hegemony 9.7.6 South and Southeast Asia: the Late Colonial Period and Nations Since 1920 9.7.7 Australia and Oceania Since 1920 9.7.8 Southwest Asia and Africa: the Late Colonial Period and Nations since 1920 10. Branches of Knowledge The lead author was Mortimer J. Adler, who wrote the introduction "Knowledge Become Self-conscious". 10.1 Logic 10.1.1 History and Philosophy of Logic 10.1.2 Formal Logic, Metalogic, & Applied Logic 10.2 Mathematics 10.2.1 History and Foundations of Mathematics 10.2.2 Branches of Mathematics 10.2.3 Applications of Mathematics 10.3 Science 10.3.1 History and Philosophy of Science 10.3.2 The Physical Sciences 10.3.3 The Earth Sciences 10.3.4 The Biological Sciences 10.3.5 Medicine 10.3.6 The Social Sciences, Psychology, Linguistics 10.3.7 The Technological Sciences 10.4 History and The Humanities 10.4.1 Historiography 10.4.2 The Humanities and Humanistic Scholarship 10.5 Philosophy 10.5.1 History of Philosophy 10.5.2 Divisions of Philosophy 10.5.3 Philosophical Schools and Doctrines 10.6 Preservation of Knowledge 10.6.1 Institutions and Techniques for the Collection, Storage, Dissemination and Preservation of Knowledge Contributors to the Outline of Knowledge Section 4.2.1 uses transparencies of organ systems originally commissioned by Parke-Davis. 
Similar in design to the three-dimensional Visible Man and Visible Woman dolls designed by sculptor Marcel Jovine, successive plastic sheets reveal different layers of human anatomy. See also History of the Encyclopædia Britannica Encyclopédie Propaedeutics A historical term for an introductory course into an art or science Threshold knowledge Outline of knowledge Outline of academic disciplines List of academic fields References Encyclopædia Britannica indexes
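The three-level structure of the Outline (parts, divisions, sections) lends itself to a simple nested representation. The sketch below is illustrative only: it encodes a tiny excerpt of the hierarchy printed above as nested Python dictionaries and counts entries at each level, the same bookkeeping that, applied to the full Outline, yields 10 parts, 41 divisions, and 167 sections.

```python
# A tiny excerpt of the Outline of Knowledge, represented as nested dicts
# (illustrative only; the full Outline has 10 parts, 41 divisions, 167 sections).
outline = {
    "1. Matter and Energy": {
        "1.1 Atoms": [
            "1.1.1 Structure and Properties of Atoms",
            "1.1.2 Atomic Nuclei and Elementary Particles",
        ],
        "1.3 The Universe": [
            "1.3.1 The Cosmos",
            "1.3.2 Galaxies and Stars",
            "1.3.3 The Solar System",
        ],
    },
    "10. Branches of Knowledge": {
        "10.1 Logic": [
            "10.1.1 History and Philosophy of Logic",
            "10.1.2 Formal Logic, Metalogic, & Applied Logic",
        ],
    },
}

parts = len(outline)
divisions = sum(len(divs) for divs in outline.values())
sections = sum(len(secs) for divs in outline.values() for secs in divs.values())
print(f"{parts} parts, {divisions} divisions, {sections} sections in this excerpt")
```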
Signalling (economics)
In contract theory, signalling (or signaling; see spelling differences) is the idea that one party (the agent) credibly conveys some information about itself to another party (the principal). Although signalling theory was initially developed by Michael Spence based on observed knowledge gaps between organisations and prospective employees, its intuitive nature led it to be adapted to many other domains, such as Human Resource Management, business, and financial markets. In Spence's job-market signaling model, (potential) employees send a signal about their ability level to the employer by acquiring education credentials. The informational value of the credential comes from the fact that the employer believes the credential is positively correlated with having the greater ability and difficulty for low-ability employees to obtain. Thus the credential enables the employer to reliably distinguish low-ability workers from high-ability workers. The concept of signaling is also applicable in competitive altruistic interaction, where the capacity of the receiving party is limited. Introductory questions Signalling started with the idea of asymmetric information (a deviation from perfect information), which relates to the fact that, in some economic transactions, inequalities exist in the normal market for the exchange of goods and services. In his seminal 1973 article, Michael Spence proposed that two parties could get around the problem of asymmetric information by having one party send a signal that would reveal some piece of relevant information to the other party. That party would then interpret the signal and adjust their purchasing behaviour accordingly—usually by offering a higher price than if they had not received the signal. There are, of course, many problems that these parties would immediately run into. Effort: How much time, energy, or money should the sender (agent) spend on sending the signal? Reliability: How can the receiver (the principal, who is usually the buyer in the transaction) trust the signal to be an honest declaration of information? Stability: Assuming there is a signalling equilibrium under which the sender signals honestly and the receiver trusts that information, under what circumstances will that equilibrium break down? Job-market signalling In the job market, potential employees seek to sell their services to employers for some wage, or price. Generally, employers are willing to pay higher wages to employ better workers. While the individual may know their own level of ability, the hiring firm is not (usually) able to observe such an intangible trait—thus there is an asymmetry of information between the two parties. Education credentials can be used as a signal to the firm, indicating a certain level of ability that the individual may possess; thereby narrowing the informational gap. This is beneficial to both parties as long as the signal indicates a desirable attribute—a signal such as a criminal record may not be so desirable. Furthermore, signaling can sometimes be detrimental in the educational scenario, when heuristics of education get overvalued such as an academic degree, that is, despite having equivalent amounts of instruction, parties that own a degree get better outcomes—the sheepskin effect. 
Spence 1973: "Job Market Signaling" paper Assumptions and groundwork Michael Spence considers hiring as a type of investment under uncertainty analogous to buying a lottery ticket and refers to the attributes of an applicant which are observable to the employer as indices. Of these, attributes which the applicant can manipulate are termed signals. Applicant age is thus an index but is not a signal since it does not change at the discretion of the applicant. The employer is supposed to have conditional probability assessments of productive capacity, based on previous experience of the market, for each combination of indices and signals. The employer updates those assessments upon observing each employee's characteristics. The paper is concerned with a risk-neutral employer. The offered wage is the expected marginal product. Signals may be acquired by sustaining signalling costs (monetary and not). If everyone invests in the signal in the exactly the same way, then the signal can't be used as discriminatory, therefore a critical assumption is made: the costs of signalling are negatively correlated with productivity. This situation as described is a feedback loop: the employer updates their beliefs upon new market information and updates the wage schedule, applicants react by signalling, and recruitment takes place. Michael Spence studies the signalling equilibrium that may result from such a situation. He began his 1973 model with a hypothetical example: suppose that there are two types of employees—good and bad—and that employers are willing to pay a higher wage to the good type than the bad type. Spence assumes that for employers, there's no real way to tell in advance which employees will be of the good or bad type. Bad employees aren't upset about this, because they get a free ride from the hard work of the good employees. But good employees know that they deserve to be paid more for their higher productivity, so they desire to invest in the signal—in this case, some amount of education. But he does make one key assumption: good-type employees pay less for one unit of education than bad-type employees. The cost he refers to is not necessarily the cost of tuition and living expenses, sometimes called out of pocket expenses, as one could make the argument that higher ability persons tend to enroll in "better" (i.e. more expensive) institutions. Rather, the cost Spence is referring to is the opportunity cost. This is a combination of 'costs', monetary and otherwise, including psychological, time, effort and so on. Of key importance to the value of the signal is the differing cost structure between "good" and "bad" workers. The cost of obtaining identical credentials is strictly lower for the "good" employee than it is for the "bad" employee. The differing cost structure need not preclude "bad" workers from obtaining the credential. All that is necessary for the signal to have value (informational or otherwise) is that the group with the signal is positively correlated with the previously unobservable group of "good" workers. In general, the degree to which a signal is thought to be correlated to unknown or unobservable attributes is directly related to its value. The result Spence discovered that even if education did not contribute anything to an employee's productivity, it could still have value to both the employer and employee. If the appropriate cost/benefit structure exists (or is created), "good" employees will buy more education in order to signal their higher productivity. 
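The cost/benefit logic can be made concrete with a small numerical sketch. The Python snippet below is illustrative only; the productivities (1 and 2), the per-unit signalling costs (y for the low-productivity group, y/2 for the high-productivity group) and the population shares anticipate the formal model laid out in "The model" section below, and the candidate values of y* are arbitrary. For each candidate credential level y* it checks whether the two groups' optimal choices confirm the employer's beliefs, and compares pay with the no-signalling (pooled) benchmark.

```python
# Illustrative two-type signalling check (numbers follow the model described below).

def chosen_education(cost_per_unit, y_star):
    """A worker compares net pay from signalling (wage 2 minus signalling cost) with wage 1 at y = 0."""
    return y_star if (2 - cost_per_unit * y_star) > 1 else 0

def check(y_star, share_low=0.5):
    y_low = chosen_education(1.0, y_star)    # low-productivity group pays y per unit of education
    y_high = chosen_education(0.5, y_star)   # high-productivity group pays y/2 per unit
    separating = (y_low == 0) and (y_high == y_star)          # employer's beliefs confirmed
    pooled_wage = share_low * 1 + (1 - share_low) * 2         # wage if no one can signal
    net_low = (2 - 1.0 * y_star) if y_low else 1.0
    net_high = (2 - 0.5 * y_star) if y_high else 1.0
    return y_star, separating, net_low, net_high, pooled_wage

for y_star in (0.8, 1.2, 1.9, 2.4):
    ys, sep, net_low, net_high, pooled = check(y_star)
    print(f"y*={ys}: separating={sep}, net pay low={net_low:.2f}, "
          f"net pay high={net_high:.2f}, pooled wage={pooled:.2f}")
```

With these numbers the employer's beliefs are self-confirming only for y* between 1 and 2, the interval derived below, and the low-productivity group always ends up below the pooled wage of 1.5, which is why that group is worse off when signalling is present.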
The increase in wages associated with obtaining a higher credential is sometimes referred to as the “sheepskin effect”, since “sheepskin” informally denotes a diploma. It is important to note that this is not the same as the returns from an additional year of education. The "sheepskin" effect is actually the wage increase above what would normally be attributed to the extra year of education. This can be observed empirically in the wage differences between 'drop-outs' vs. 'completers' with an equal number of years of education. It is also important that one does not attribute the fact that higher wages are paid to more educated individuals entirely to signalling or the 'sheepskin' effects. In reality, education serves many different purposes for individuals and society as a whole. Only when all of these aspects, as well as all the many factors affecting wages, are controlled for, does the effect of the "sheepskin" approach its true value. Empirical studies of signalling indicate it is a statistically significant determinant of wages; however, it is one of a host of other attributes—age, sex, and geography are examples of other important factors. The model To illustrate his argument, Spence imagines, for simplicity, two productively distinct groups in a population facing one employer. The signal under consideration is education, measured by an index y, and is subject to individual choice. Education costs are both monetary and psychic. The data can be summarized as follows: Group I has marginal product 1, makes up a proportion q1 of the population, and incurs a signalling cost of y to acquire education level y; Group II has marginal product 2, makes up the remaining proportion 1 − q1, and incurs a signalling cost of y/2. Suppose that the employer believes that there is a level of education y* below which productivity is 1 and above which productivity is 2. Their offered wage schedule W(y) will then be: W(y) = 1 for y < y*, and W(y) = 2 for y ≥ y*. Working with these hypotheses, Spence shows that: There is no rational reason for someone choosing a level of education different from 0 or y*. Group I sets y=0 if 1 > 2 − y*, that is, if the return for not investing in education is higher than the return for investing in education. Group II sets y=y* if 2 − y*/2 > 1, that is, if the return for investing in education is higher than the return for not investing in education. Therefore, putting the previous two inequalities together, if 1 < y* < 2, then the employer's initial beliefs are confirmed. There are infinitely many equilibrium values of y* belonging to the interval [1,2], but they are not equivalent from the welfare point of view. The higher y*, the worse off is Group II, while Group I is unaffected. If no signaling takes place, each person is paid their unconditional expected marginal product, q1·1 + (1 − q1)·2. Therefore, Group I is worse off when signaling is present. In conclusion, even if education has no real contribution to the marginal product of the worker, the combination of the beliefs of the employer and the presence of signalling transforms the education level y* into a prerequisite for the higher-paying job. It may appear to an external observer that education has raised the marginal product of labor, without this necessarily being true. Another model For a signal to be effective, certain conditions must be true. In equilibrium, the cost of obtaining the credential must be lower for high-productivity workers and act as a signal to the employer such that they will pay a higher wage. In this model it is optimal for the higher-ability person to obtain the credential (the observable signal) but not for the lower-ability individual. Consider the outcomes of a low-ability person l and a high-ability person h with and without the signal S*. The structure is as follows: There are two individuals with differing ability (productivity) levels.
A higher ability / productivity person: h
A lower ability / productivity person: l
The premise of the model is that a person of high ability (h) has a lower cost of obtaining a given level of education than does a person of lower ability (l). Costs can be monetary, such as tuition, or psychological, such as the stress incurred to obtain the credential.
W0 is the expected wage for an education level less than S*
W* is the expected wage for an education level equal to or greater than S*
For the individual, the decision rule is to obtain the credential if the wage gain from holding it covers its cost, W* − W0 ≥ Cost(credential), and not to obtain it if W* − W0 < Cost(credential). Thus, if both individuals act rationally, it is optimal for person h to obtain S* but not for person l, so long as the following conditions are satisfied: for the low-ability type, W* − W0 < Cost_l(S*), so that l chooses not to obtain the credential, while for the high-ability type, W* − W0 ≥ Cost_h(S*), so that h does obtain it. For there to be a separating equilibrium, the high type h must also check their outside option: they must prefer the net pay in the separating equilibrium to the net pay in the pooling equilibrium, in which neither type signals and both are paid the pooled expected wage. Otherwise the high type will also choose not to obtain the credential, and a pooling equilibrium results. For the employers, wages are set to expected productivity conditional on the signal: W* is the expected productivity of workers whose cost of the credential is low enough that they obtain it, and W0 is the expected productivity of those who do not. In equilibrium, in order for the signalling model to hold, the employer must recognize the signal and pay the corresponding wage, and this will result in the workers self-sorting into the two groups. One can see that the cost/benefit structure for a signal to be effective must fall within certain bounds or else the system will fail.

IPOs

Signaling typically occurs in an IPO, where a company issues shares to the public market to raise equity capital. This arises due to information asymmetry between potential investors and the company raising capital. Because firms are private before an IPO, prospective investors have limited information about the firm's true value or future prospects, which may lead to market inefficiencies and mispricing. To overcome this information asymmetry, firms may use signaling to communicate their true value to potential investors. Leland and Pyle (1977) analyzed the role of signals within the IPO process, finding that companies with good future prospects and higher chances of success ("good companies") should always send clear signals to the market when going public, i.e. the owner should keep control of a significant percentage of the company. In order for this signal to be perceived as reliable, it must be too costly to be imitated by "bad companies". If no such signal is provided, asymmetric information will result in adverse selection in the IPO market. Various forms of signaling have also been observed during IPOs, especially when companies underprice the offered share price to prospective investors. Underpricing can be explained by prospect theory, which suggests that investors tend to be more risk-averse when it comes to gains than losses.
Hence, when a company offers its shares at a discount to their true value, it creates the perception of a gain for investors, which can increase demand for the shares and lead to a higher aftermarket price. This excess demand also sends a positive signal to the market that the firm is undervalued, as the issuer signals that it is leaving money on the table, defined as the number of shares sold multiplied by the difference between the first-day closing market price and the offer price. This represents a substantial indirect cost to the issuing firm, but it allows initial investors to achieve sizeable financial returns on the very first day of trading. In spite of leaving money on the table, underpricing is still beneficial to the firm because it allows it to raise more capital than it would have if it had priced the shares at their true value, assuming a higher price at market close. It also helps to generate positive publicity and media attention for the issuer, providing further signaling of the company's positive growth prospects. Firms can also signal their quality to the market through their choice of underwriter. A reputable underwriter, such as a well-known investment bank, can signal that the issuing firm is of high quality and has a strong likelihood of future success. Considering the underwriter's role in providing due diligence and expertise in the IPO process, it is unlikely for an underwriter to associate itself with firms that have a high likelihood of failure. This helps increase the credibility of the issuing firm, and hence of the share capital on offer. Additionally, the underwriter's compensation structure, which is typically based on the success of the IPO, provides an incentive for the underwriter to ensure that the IPO succeeds. Therefore, by choosing a reputable underwriter, the issuing firm can signal its quality to potential investors, which increases the demand for its shares and can potentially lead to a higher aftermarket price. However, while signaling mechanisms can benefit issuers, they can also impose costs on investors. Information asymmetry can make it difficult for investors to distinguish between true signals of quality and mere attempts to manipulate the market. Moreover, the use of signals can lead to a "winner's curse" in which investors overpay for shares that are not worth the price paid. Thus, understanding the costs and benefits of different signaling mechanisms is crucial to improving market efficiency and reducing information asymmetry problems.

Brands

The development of brand capital is an important strategy firms use to signal quality and reliability to consumers. Waldfogel and Chen (2006) studied the impact that information provision on internet retail sites has on the importance of branding as a signalling mechanism. Their study used web visits to branded vendors, unbranded vendors, and third-party sites that collect and collate data for consumers, labelled information intermediaries. The paper did not directly measure the effect on consumer spending, because it did not include actual consumer expenditure on branded or unbranded products, and it acknowledged that consumer spending may deviate from visiting behaviour. Nonetheless, it found that using information intermediaries increases the number of consumer visits to unbranded vendors while depressing visits to branded vendors.
The authors concluded by observing that while branding is a market-concentrating mechanism, the internet has the potential to reduce market concentration, as information provision undermines the effectiveness of brand spending. The extent of this effect depends on how easily and cost-effectively information can be provided.

Altruism and Signalling

Various studies and experiments have analysed signalling in the context of altruism. Historically, because communities were small, cooperation was particularly important to ensure human flourishing. Signalling altruism is critical in human societies because altruism is a way of signalling a willingness to cooperate. Studies indicate that altruism boosts an individual's reputation in the community, which in turn enables the individual to reap greater benefits from that reputation, including increased assistance when they are in need. There is often difficulty in distinguishing between pure altruists, who perform altruistic acts expecting no benefit to themselves whatsoever, and impure altruists, who perform altruistic acts expecting some form of benefit. Pure altruists will be altruistic irrespective of whether anyone observes their conduct, whereas impure altruists will give where their altruism is observed and can be reciprocated. Laboratory experiments conducted by behavioural economists have found that pure altruism is relatively rare. A study by Dana, Weber and Xi Kuang found that in dictator games, the rate of proposing 5:5 distributions was much higher when proposers could not excuse their choice by reference to moral considerations. In games where proposers were provided by the experimenters with a mitigating reason they could cite to the other person to explain their decision, 6:1 splits were much more common than the fair 5:5 split. Empirical research in real-world settings shows that charitable giving diminishes with anonymity: anonymous donations are much less common than non-anonymous donations. With respect to donations to a national park, researchers found participants were 25% less generous when their identities were not revealed than when they were. They also found donations were subject to reference effects: participants on average gave less money when researchers told them the average donation was low than when they were told it was high. A study of charity runs, in which donors could reveal only their name, only the amount, both their name and the amount, or remain completely anonymous with no reference to the donation amount, had three main findings. First, donors who gave a significant amount of money revealed the amounts donated but were more likely not to reveal their names. Second, those who gave small donations were more likely to reveal their names but hide their donations. Third, average donors were most likely to reveal both name and amount information. The researchers noted that small donations were consistent with free-riding behaviour, in which participants try to obtain reputation enhancement by noting that they donated, without having to donate at the levels that would otherwise be necessary to get the same boost if the amount were published. Average donors revealed both name and amount, also to gain reputation. With respect to high donors, the researchers thought two alternatives were possible.
Either donors did not reveal their names because, despite high donations signalling high-cost altruism, there were larger reputational drawbacks to what might be perceived as showboating, or large contributors were genuinely altruistic and wanted to signal the importance of the cause. The authors thought that revealing the amounts given is more consistent with the latter hypothesis.

eBay Motors' Price Premium

Signalling has been studied and proposed as a means to address asymmetric information in markets for "lemons". Recently, signalling theory has been applied to used-car markets such as eBay Motors. Lewis (2011) examines the role of information access and shows that the voluntary disclosure of private information increases the prices of used cars on eBay. Dimoka et al. (2012) analyzed data from eBay Motors on the role of signals in mitigating product uncertainty. Extending the information asymmetry literature in consumer behavior from the agent (seller) to the product, the authors theorized and validated the nature and dimensions of product uncertainty, which is distinct from, yet shaped by, seller uncertainty. The authors also found that information signals (diagnostic product descriptions and third-party product assurances) reduce product uncertainty, which otherwise negatively affects the price premiums (relative to book values) of used cars in online used-car markets.

Internet-Based Hospitality Exchange

In internet-based hospitality exchange networks such as BeWelcome and Warm Showers, hosts do not expect to receive payments from travelers; the relation between traveler and host is instead shaped by mutual altruism. Travelers send homestay requests to hosts, which the hosts are not obligated to accept. Both networks, as non-profit organizations, grant trusted teams of scientists access to their anonymized data for the publication of insights for the benefit of humanity. In 2015, datasets from BeWelcome and Warm Showers were analyzed. Analysis of 97,915 homestay requests from BeWelcome and 285,444 homestay requests from Warm Showers showed a general regularity: the less time is spent writing a homestay request, the lower the probability of it being accepted by a host. Low-effort communication, such as 'copy and paste' requests, evidently sends the wrong signal.

Outside options

Most signalling models are plagued by a multiplicity of possible equilibrium outcomes. In a study published in the Journal of Economic Theory, a signalling model has been proposed that has a unique equilibrium outcome. In the principal-agent model it is argued that an agent will choose a large (observable) investment level when he has a strong outside option. Yet an agent with a weak outside option might try to bluff by also choosing a large investment, in order to make the principal believe that the agent has a strong outside option (so that the principal will make a better contract offer to the agent). Hence, when an agent has private information about his outside option, signalling may mitigate the hold-up problem.

Foreign policy and international relations

Due to the nature of international relations and foreign policy, signaling has long been a topic of interest when analyzing the actions of the agents involved. The study of signaling in foreign policy has allowed economists and academics to better understand the actions and reactions of foreign bodies when presented with varying information.
Typically, when these foreign parties interact, their actions depend heavily on the anticipated actions and reactions of each other. In many cases, however, there is an asymmetry of information between the two parties, with each looking to advance interests that are not mutually beneficial.

Costly signaling

In foreign policy it is common to see game-theoretic problems such as the prisoner's dilemma and the chicken game arise, as each party has a strategy it prefers to pursue regardless of the actions of the other party. In order to signal to the other parties, and for that signal to be credible, strategies such as tying hands and sinking costs are often implemented. These are examples of costly signals, which typically involve some form of assurance and commitment in order to show that the signal is credible and that the party receiving the signal should act on the information given. Despite this, there is still much contention as to whether costly signaling is effective in practice. Studies by Quek (2016) suggested that decision makers such as politicians and leaders do not seem to interpret and understand signals the way that models suggest they should.

Sinking costs and tying hands

A costly signal in which the cost of an action is incurred upfront ("ex ante") is a sunk cost. An example is the mobilization of an army, which sends a clear signal of intentions and whose costs are incurred immediately. When the cost of the action is incurred after the decision is made ("ex post"), it is considered to be tying hands. A common example is an alliance, which does not have a large initial monetary cost yet ties the hands of the parties, as either party would incur significant costs if it abandoned the other, especially in crises. Theoretically, both sinking costs and tying hands are valid forms of costly signaling; however, they have garnered much criticism due to differing beliefs regarding their overall effectiveness in altering the likelihood of war. Recent studies, such as work published in the Journal of Conflict Resolution, suggest that sinking costs and tying hands are both effective in increasing credibility. This was shown by examining how changes in the cost of a costly signal affect its credibility; prior studies had been binary and static in nature, limiting the capability of the models. These findings increased the validity of using these signaling mechanisms in foreign diplomacy.

Effectiveness of signaling through time

The initial research into signaling suggested that it was an effective tool for managing foreign economic and military affairs; however, with time and more thorough analysis, problems began to present themselves, these being:
The extent to which the signal is received and acted upon may not justify the cost of the signal.
Parties and those who govern them are able to signal in more ways than just through actions.
Different signals often provoke different responses from different parties (heterogeneity plays a large part in the effectiveness of signals).
In Fearon's original models (the bargaining model of war), the setup was simple: a party would display its intentions, and the intended audience would then interpret the signals and act upon them, creating an idealized scenario that validates the use of signaling.
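As a rough numerical illustration of the distinction between sinking costs and tying hands described above, the following Python sketch compares a bluffing (unresolved) sender's expected payoff under each mechanism. It is a deliberately stylized toy, not Fearon's or Slantchev's actual model; the prize value, audience cost, signal cost, and the probability that the target backs down are all assumed numbers.

```python
# Toy comparison of two costly-signalling mechanisms (assumed numbers, illustrative only).
# An unresolved ("bluffing") sender values the disputed prize at a low amount and will
# back down rather than fight if the target resists.

prize_value_bluffer = 2.0   # what the prize is worth to the unresolved sender
p_target_backs_down = 0.4   # assumed probability the target concedes after seeing a signal

sunk_cost = 3.0             # sinking costs: paid up front, regardless of what happens later
audience_cost = 3.0         # tying hands: paid only if the sender later backs down

# Sinking costs: the cost is incurred ex ante, win or lose.
payoff_sunk = p_target_backs_down * prize_value_bluffer - sunk_cost

# Tying hands: the cost is incurred ex post, only when the bluff is called and the
# sender backs down (with probability 1 - p_target_backs_down).
payoff_tied = (p_target_backs_down * prize_value_bluffer
               - (1 - p_target_backs_down) * audience_cost)

print(f"Bluffer's expected payoff, sunk cost:   {payoff_sunk:+.2f}")   # -2.20
print(f"Bluffer's expected payoff, tying hands: {payoff_tied:+.2f}")   # -1.00

# If the expected payoff of bluffing is negative, an unresolved sender prefers not to
# signal at all, so the signal credibly separates resolved from unresolved types.
for name, payoff in [("sunk cost", payoff_sunk), ("tying hands", payoff_tied)]:
    print(f"Signal via {name} deters bluffing: {payoff < 0}")
```

Under these particular numbers both mechanisms deter the bluff; with other parameter values one mechanism can deter bluffing while the other does not, which is the kind of comparison the cost-variation studies cited above examine.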
In later work, Slantchev (2005) suggested that military mobilization used as a signal, even when intended to avoid war, can increase tensions, and that it can act both as a sunk cost and as a way of tying the party's hands. Furthermore, Yarhi-Milo, Kertzer and Renshon (2017) were able to use a more dynamic model to assess the effectiveness of these signals given varying cost levels and reaction levels.

See also
Countersignalling
Forward guidance
Impression management
Signalling game
Stigma management
Virtue signalling
Handicap Principle

References

Further reading
Spence's "Job Market Signaling" paper (also available as his Nobel Prize lecture PDF)

Asymmetric information
Game theory
Selectorate theory
The selectorate theory is a theory of government that studies the interactive relationships between political survival strategies and economic realities. It was first detailed in The Logic of Political Survival, authored by Bruce Bueno de Mesquita of New York University (NYU), Alastair Smith of NYU, Randolph M. Siverson of UC Davis, and James D. Morrow of the University of Michigan. In later years the authors, especially Bueno de Mesquita and Smith, have extended the selectorate theory to various other policy areas through subsequent academic publications and books. The theory is applicable to all types of organizations with leadership, including (among others) private corporations and non-state actors. The theory is known for its use of continuous variables to classify regimes by describing the ratios of coalitions within the total population. Regimes are classified on a spectrum of coalition size, as opposed to conventional, categorical labels (for example, the authors define conventional democracy as a large-coalition regime and autocracy as a small-coalition regime). The theory has been applied to a large range of topics including foreign aid, the choice of tax rates by incumbent political leaders, as well as medieval European history.

Overview

In selectorate theory, three groups of people constrain leaders. These groups are the nominal selectorate, the real selectorate, and the winning coalition. The nominal selectorate, also referred to as the interchangeables, includes every person who has some say in choosing the leader (for example, in an American presidential election, all registered voters). The real selectorate, also referred to as the influentials, are those who really choose the leaders (for example, in an American presidential election, the people who cast a vote for one of the candidates). The winning coalition, also referred to as the essentials, are those whose support translates into victory (for example, in an American presidential election, those voters that get a candidate to 270 Electoral College votes). In other countries, leaders may stay in power with the support of much smaller numbers of people, such as senior figures in the security forces and business oligarchs in contemporary Russia. The fundamental premise of selectorate theory is that the primary goal of a leader, regardless of secondary policy concerns, is to remain in power. To remain in power, leaders must retain support from every member of their winning coalition. When the winning coalition is small, as in autocracies, the leader will tend to use private goods to satisfy the coalition. When the winning coalition is large, as in democracies, the leader will tend to use public goods to satisfy the coalition. In The Dictator's Handbook, Bueno de Mesquita and Smith state five rules that leaders should use to stay in power:
The smaller the winning coalition, the fewer people there are to satisfy in order to remain in control.
The larger the selectorate, the easier it is to replace dissenters in the winning coalition.
Extract as much wealth as you can from the population without provoking a rebellion or an economic recession.
Give your essential supporters just enough rewards to keep them loyal.
The remaining funds are yours to spend at your discretion, but do not give your essential supporters extra rewards or they will grow too independent and become a threat.

Distribution of goods

In the selectorate theory, incumbents retain the loyalty of their winning coalition provided they can outcompete any challenger.
Incumbents induce this loyalty by offering the members of their winning coalition a mix of public and private goods. A public good is a non-excludable good such as national defense or clean water. A private good is an excludable good, such as luxury items or, especially, currency. Because public goods are non-excludable, they are enjoyed by all members of the population, while private goods are enjoyed only by the members of the winning coalition. Selectorate theory predicts that the ratio of the winning coalition (W) to the selectorate (S) influences leaders' spending habits, particularly their optimal expenditures on both private and public goods. A leader's loyalty norm is the ratio W/S and measures the chance any member of the selectorate has of being in the winning coalition of the next regime. Loyalty norms closer to 0 indicate higher loyalty of the winning coalition to the leader, since members of the winning coalition have a higher probability (modeled as 1 − W/S) of being excluded from a future coalition and hence losing their private goods. Loyalty norms closer to 1 incentivize leaders to spend more on public goods and less on private goods, while loyalty norms closer to 0 incentivize leaders to spend less on public goods and more on private goods. Loyalty norms between 1 and 0 offer incentives to mix spending on public goods and private goods. The reason for such allocations is that public goods are a cheaper way to satisfy large winning coalitions (per member of the winning coalition), while private goods are a cheaper way to satisfy small winning coalitions. In all cases, goods expenditures are subject to a budget constraint provided by total revenue (R), and any revenue left over goes to the leader. Selectorate theory can be used to derive the spending habits of organizations, including nations and private organizations. Virtually all organizations spend money on both public and private goods. In countries with large winning coalitions, meaning democracies, leaders spend more on public goods such as infrastructure, education, and regulatory agencies, while in countries with small winning coalitions, meaning dictatorships, leaders spend more on private goods such as money transfers and luxury items. However, democracies still provide private goods, such as free healthcare, while dictatorships still provide public goods, such as national defense. The amount of revenue the leader needs to spend to keep any member of the winning coalition loyal is given by the following formula, written in expanded form for better illustration: expected payout per coalition member = (W/S) × (R/W) + (1 − W/S) × 0. Each member of the winning coalition can expect to earn a proportional share of the revenue, illustrated by the (R/W) term, if they succeed in being part of the next winning coalition. The chances of this are effectively the loyalty norm, illustrated by the (W/S) term. If they fail to be in the winning coalition, they will receive none of the revenue; the chances of this are illustrated by the (1 − W/S) term. Leaders therefore only have to spend anything above this expected payout to keep the members loyal. The amount a leader can keep is R − W × (W/S) × (R/W) = R × (1 − W/S). As the loyalty norm becomes weaker, the payout needed for each member of the winning coalition becomes higher. At some point the payout becomes so high that a leader is better off providing public goods, which can be used by any member of the winning coalition, as opposed to private goods such as direct payouts or corruption.
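The following short Python sketch applies the payout logic just described to a few hypothetical regimes; the coalition and selectorate sizes and the revenue figure are assumed numbers chosen purely for illustration. It computes the loyalty norm W/S, the minimum payout each coalition member must receive, and the discretionary surplus R(1 − W/S) left to the leader.

```python
# Illustrative calculation of selectorate-theory quantities (assumed numbers).

def selectorate_summary(name, W, S, R):
    """Loyalty norm, minimum per-member payout, and the leader's discretionary surplus."""
    loyalty_norm = W / S                         # chance a selectorate member is in the next coalition
    payout_per_member = loyalty_norm * (R / W)   # expected value of defecting; the leader must match it
    total_payouts = W * payout_per_member        # equals R * W / S
    leader_surplus = R - total_payouts           # equals R * (1 - W/S)
    print(f"{name:<10} W/S = {loyalty_norm:0.3f}  "
          f"payout/member = {payout_per_member:8.2f}  "
          f"leader keeps = {leader_surplus:12.2f}")

revenue = 1_000_000.0  # assumed total revenue R

# Hypothetical regimes: a large-coalition democracy, a small-coalition autocracy
# with a huge selectorate, and a monarchy with a small coalition and small selectorate.
selectorate_summary("Democracy", W=500_000, S=1_000_000, R=revenue)
selectorate_summary("Autocracy", W=1_000,   S=1_000_000, R=revenue)
selectorate_summary("Monarchy",  W=100,     S=1_000,     R=revenue)
```

With a small loyalty norm (the autocracy row), coalition members can be kept loyal with tiny private payouts and the leader retains almost all of the revenue; as W/S approaches 1, the surplus shrinks and public goods become the cheaper way to satisfy the coalition, which is the comparison the discussion of regime types below builds on.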
Following this, governments should perform better when they have weak loyalty norms, visible through higher levels of economic growth and lower levels of state predation, but their leaders should have much shorter tenures. In democracies, which have very weak loyalty norms, leaders last only briefly, sometimes changing each election cycle. This mechanism is used to explain why even well-performing leaders in democracies spend less time in office than dictators with terrible performance.

Government types, leaders, and challenger threats

According to the selectorate theory, a leader has the greatest chance of political survival when the selectorate is large and the winning coalition is small, which occurs in an autocracy. This is because those who are in a winning coalition can easily be replaced by other members of the selectorate who are not in the winning coalition. Thus, the costs of defection for members of the winning coalition can be potentially large, namely the loss of all private goods. The chances of a challenger replacing the leader are smallest in such an autocratic system, since those in the winning coalition are unlikely to defect. The ratio of private to public goods as payoff to the winning coalition is the highest in such a system. A monarchy, where the selectorate is small and the winning coalition is even smaller, provides a challenger with a greater opportunity to overthrow the current leader. This is because the proportion of selectorate members who are also in the winning coalition is relatively large. That is, if a new leader comes to power, chances are a given member of the winning coalition will remain within the coalition. The incentive to defect to attain a greater amount of goods offered by a challenger is not, in this case, outweighed by the risk of not being included in the new winning coalition. Here, the proportion of private goods in relation to public goods is seen to decline. A scenario in which both the winning coalition is large and the selectorate is even larger provides the least amount of stability to a leader's occupancy of power; such a system is a democracy. Here, the proportion of public goods outweighs private goods simply because of the sheer size of the winning coalition; it would be far too costly to provide private goods to every individual member of the winning coalition when the benefits of public goods would be enjoyed by all. Because of this fact—that the leader cannot convince winning coalition members to remain loyal through private good incentives, which are in turn cost-restrictive—the challenger poses the greatest threat to the incumbent. This degree of loyalty to the incumbent leader, whatever the government structure may be, is called the loyalty norm. A scenario where the winning coalition is large and the selectorate is small is logically impossible, since the winning coalition is a subset of the selectorate.

Case study: Russia

Joseph Stalin held more power over the Russian people than the tsars before him because the Soviet Union was a large-selectorate system. Under the tsar, only aristocrats could serve in senior government positions, but the communists abolished the aristocracy and made all Russians equal in principle. This allowed Stalin to appoint anyone he pleased to positions of influence. Nikita Khrushchev, for example, came from a peasant background. Stalin's subordinates were therefore very submissive because they knew he could easily replace any of them.
Implications of the selectorate theory

Foreign Aid

Bueno de Mesquita and Smith further applied the selectorate theory to the field of foreign aid. They propose that foreign aid is given to improve the survival of political leaders in both donor and recipient states, not to help the people of recipient states. Bueno de Mesquita and Smith argue that the size of a leader's winning coalition and government revenues affect the leader's decision-making on policy concessions and aid. By analyzing bilateral aid transfers by Organisation for Economic Co-operation and Development (OECD) nations between 1960 and 2001, they discovered that leaders in aid-recipient countries are more likely to grant policy concessions to donors when the winning coalition is small, because leaders with small winning coalitions can easily reimburse supporters for the concession (such as dictatorial Egypt's domestically unpopular but internationally desirable normalisation of relations with Israel). As a result, relatively poor, small-coalition systems are most likely to get aid, and large-coalition systems are least likely to get aid but will get larger aid flows when they do. The conclusion of their study is that interest exchange is the primary reason for the practice of foreign aid and that OECD members have little humanitarian motivation for aid giving. Nancy Qian's study supported this conclusion by arguing that "The literature shows that the primary purpose of aid is often not to alleviate poverty and that out of all of the foreign aid flows, only 1.69% to 5.25% are given to the poorest twenty percent of countries in any given year".

Reception

Jessica L.P. Weeks argues that selectorate theory makes flawed assumptions about authoritarian regimes. First, she writes that selectorate theory is wrong in presuming that members of small winning coalitions stand to lose their power if the ruler loses power (she notes that these elites usually have independent sources of power and derive their status from seniority and/or competence). Second, she argues that selectorate theory is wrong in assuming all actors perceive the world in the same way (she notes that different authoritarian regime types should systematically lead to different perceptions by the leaders, which would affect the kinds of predictions that selectorate theory can make).

In popular culture

The Dictator's Handbook was adapted and condensed into a two-part series on YouTube by creator CGP Grey in 2016.

References

Further reading

Political science theories
Democracy
Political science
Comparative politics
Social ecological model
Socio-ecological models were developed to further the understanding of the dynamic interrelations among various personal and environmental factors. Socioecological models were introduced to urban studies by sociologists associated with the Chicago School after the First World War as a reaction to the narrow scope of most research conducted by developmental psychologists. These models bridge the gap between behavioral theories that focus on small settings and anthropological theories. Introduced as a conceptual model in the 1970s, formalized as a theory in the 1980s, and continually revised by Bronfenbrenner until his death in 2005, Urie Bronfenbrenner's Ecological Framework for Human Development applies socioecological models to human development. In his initial theory, Bronfenbrenner postulated that in order to understand human development, the entire ecological system in which growth occurs needs to be taken into account. In subsequent revisions, Bronfenbrenner acknowledged the relevance of biological and genetic aspects of the person in human development. At the core of Bronfenbrenner's ecological model is the child's biological and psychological makeup, based on individual and genetic developmental history. This makeup continues to be affected and modified by the child's immediate physical and social environment (microsystem) as well as by interactions among the systems within the environment (mesosystems). Other broader social, political and economic conditions (exosystem) influence the structure and availability of microsystems and the manner in which they affect the child. Finally, social, political, and economic conditions are themselves influenced by the general beliefs and attitudes (macrosystems) shared by members of the society (Bukatko & Daehler, 1998). In its simplest terms, systems theory is the idea that one thing affects another: events and existence do not occur in a vacuum but in relation to changing circumstances. Systems are dynamic and, paradoxically, retain their own integrity while adapting to the inevitable changes going on around them. Our individual and collective behaviour is influenced by everything from our genes to the political environment, and it is not possible to fully understand our development and behaviour without taking all of these elements into account. Indeed, this is what some social work theories insist that we do if we are to make effective interventions. Lying behind these models is the idea that everything is connected and everything can affect everything else. Complex systems are made up of many parts; it is not possible to understand the whole without recognizing how the component parts interact, affect and change each other. As the parts interact, they create the character and function of the whole.

From systems thinking to socioecological models

A system can be defined as a comparatively bounded structure consisting of interacting, interrelated, or interdependent elements that form a whole. Systems thinking argues that the only way to fully understand something or an occurrence is to understand the parts in relation to the whole. Thus, systems thinking, which is the process of understanding how things influence one another within a whole, is central to ecological models. Generally, a system is a community situated within an environment. Examples of systems are health systems, education systems, food systems, and economic systems.
Drawing from natural ecosystems, which are defined as the network of interactions among organisms and between organisms and their environment, social ecology is a framework or set of theoretical principles for understanding the dynamic interrelations among various personal and environmental factors. Social ecology pays explicit attention to the social, institutional, and cultural contexts of people-environment relations. This perspective emphasizes the multiple dimensions (example: physical environment, social and cultural environment, personal attributes), multiple levels (example: individuals, groups, organizations), and complexity of human situations (example: the cumulative impact of events over time). Social ecology also incorporates concepts such as interdependence and homeostasis from systems theory to characterize reciprocal and dynamic person-environment transactions. Individuals are key agents in ecological systems. From an ecological perspective, the individual is both a postulate (a basic entity whose existence is taken for granted) and a unit of measurement. As a postulate, an individual has several characteristics. First, an individual requires access to an environment, upon which they are dependent for knowledge. Second, they are interdependent with other humans; that is, an individual is always part of a population and cannot exist otherwise. Third, an individual is time-bound, or has a finite life cycle. Fourth, they have an innate tendency to preserve and expand life. Fifth, they have a capacity for behavioral variability. Social ecological models are thus applicable to the processes and conditions that govern the lifelong course of human development in the actual environment in which human beings live. Urie Bronfenbrenner's Ecological Framework for Human Development is considered to be the most recognized and utilized social ecological model (as applied to human development). Ecological systems theory considers a child's development within the context of the systems of relationship that form his or her environment.

Bronfenbrenner's ecological framework for human development

Bronfenbrenner's ecological framework for human development was first introduced in the 1970s as a conceptual model and became a theoretical model in the 1980s. Two distinct phases of the theory can be identified. Bronfenbrenner stated that "it is useful to distinguish two periods: the first ending with the publication of the Ecology of Human Development (1979), and the second characterized by a series of papers that called the original model into question." Bronfenbrenner's initial theory illustrated the importance of place to aspects of the context, and in the revision he engaged in self-criticism for discounting the role a person plays in his or her own development while focusing too much on the context. Although revised, altered, and extended, the heart of Bronfenbrenner's theory remains ecological, stressing person-context interrelatedness. The Bronfenbrenner ecological model examines human development by studying how human beings create the specific environments in which they live. In other words, human beings develop according to their environment; this can include society as a whole and the period in which they live, which will impact behavior and development. This views behavior and development as a symbiotic relationship, which is why the theory is also known as the "bioecological" model.
Ecological systems theory

Bronfenbrenner developed his ecological systems theory to explain how everything in a child and the child's environment affects how the child grows and develops. In his original theory, Bronfenbrenner postulated that in order to understand human development, the entire ecological system in which growth occurs needs to be taken into account. This system is composed of five socially organized subsystems that support and guide human development. Each system depends on the contextual nature of the person's life and offers an ever-growing diversity of options and sources of growth. Furthermore, within and between each system are bi-directional influences. These bi-directional influences imply that relationships have impact in two directions, both away from the individual and towards the individual. Because we potentially have access to these subsystems, we are able to have more social knowledge, an increased set of possibilities for learning problem solving, and access to new dimensions of self-exploration.

Microsystem

The microsystem is the layer closest to the child and contains the structures with which the child has direct contact. The microsystem encompasses the relationships and interactions a child has with his or her immediate surroundings, such as family, school, neighborhood, or childcare environments. At the microsystem level, bi-directional influences are strongest and have the greatest impact on the child. However, interactions at outer levels can still impact the inner structures. This core environment stands as the child's venue for initially learning about the world. As the child's most intimate learning setting, it offers him or her a reference point for the world. The microsystem may provide the nurturing centerpiece for the child or become a haunting set of memories. The real power in this initial set of interrelations with family for the child is what they experience in terms of developing trust and mutuality with their significant people. The family is the child's early microsystem for learning how to live. The caring relations between child and parents (or other caregivers) can help to influence a healthy personality. For example, the attachment behaviors of parents offer children their first trust-building experience.

Mesosystem

The mesosystem moves us beyond the dyad or two-party relation. Mesosystems connect two or more systems in which child, parent and family live. Mesosystems provide the connection between the structures of the child's microsystem. For example, the connection between the child's teacher and his parents, or between his church and his neighborhood, each represent mesosystems.

Exosystem

The exosystem defines the larger social system in which the child does not directly function. The structures in this layer impact the child's development by interacting with some structure in his/her microsystem. Parent workplace schedules or community-based family resources are examples. The child may not be directly involved at this level, but they do feel the positive or negative force involved with the interaction with their own system. The main exosystems that indirectly influence youth through their family include: school and peers, parents' workplace, family social networks and neighborhood community contexts, local politics and industry. Exosystems can be empowering (example: a high quality child-care program that benefits the entire family) or they can be degrading (example: excessive stress at work impacts the entire family).
Furthermore, absence from a system makes it no less powerful in a life. For example, many children realise the stress of their parents' workplaces without ever physically being in these places.

Macrosystem

The macrosystem is the larger cultural context, such as attitudes and social conditions within the culture where the child is located. Macrosystems can be used to describe the cultural or social context of various societal groups such as social classes, ethnic groups, or religious affiliates. This layer is the outermost layer in the child's environment. The effects of larger principles defined by the macrosystem have a cascading influence throughout the interactions of all other layers. The macrosystem influences what, how, when and where we carry out our relations. For example, a program like Women, Infants, and Children (WIC) may positively impact a young mother through health care, vitamins, and other educational resources. It may empower her life so that she, in turn, is more effective and caring with her newborn. In this example, without an umbrella of beliefs, services, and support for families, children and their parents are open to great harm and deterioration. In a sense, the macrosystem that surrounds us helps us to hold together the many threads of our lives.

Chronosystem

The chronosystem encompasses the dimension of time as it relates to a child's environment. Elements within this system can be either external, such as the timing of a parent's death, or internal, such as the physiological changes that occur with the aging of a child. Family dynamics need to be framed in the historical context in which they occur within each system. In particular, historical influences in the macrosystem have a powerful effect on how families can respond to different stressors. Bronfenbrenner suggests that, in many cases, families respond to different stressors within the societal parameters existent in their lives.

Process–person–context–time model

Bronfenbrenner's most significant departure from his original theory is the inclusion of processes of human development. Processes, per Bronfenbrenner, explain the connection between some aspect of the context or some aspect of the individual and an outcome of interest. The full, revised theory deals with the interaction among processes, person, context and time, and is labeled the Process–Person–Context–Time model (PPCT). Two interdependent propositions define the properties of the model. Furthermore, in contrast to the original model, the Process–Person–Context–Time model is more suitable for scientific investigation. Per Bronfenbrenner: "Proposition 1: In its early phase and throughout the lifecourse, human development takes place through processes of progressively more complex reciprocal interactions between an active, evolving biopsychological human organism and the persons, objects and symbols in its immediate environment. To be effective, the interaction must occur on a fairly regular basis over extended periods of time. These forms of interaction in the immediate environment are referred to as proximal processes. Proposition 2: the form, power, content and direction of the proximal processes affecting development vary systematically as a joint function of the characteristics of the developing person, of the environment, immediate and more remote, in which the processes are taking place, and the nature of the developmental outcome under consideration." Processes play a crucial role in development.
Proximal processes are fundamental to the theory. They constitute the engines of development, because it is by engaging in activities and interactions that individuals come to make sense of their world, understand their place in it, and both play their part in changing the prevailing order while fitting into the existing one. The nature of proximal processes varies according to aspects of the individual and of the context, both spatially and temporally. As explained in the second of the two central propositions, social continuities and changes occur over time through the life course and the historical period during which the person lives. Effects of proximal processes are thus more powerful than those of the environmental contexts in which they occur.

Person. Bronfenbrenner acknowledges here the relevance of biological and genetic aspects of the person. However, he devoted more attention to the personal characteristics that individuals bring with them into any social situation. He divided these characteristics into three types: demand, resource, and force characteristics. Demand characteristics are those that act as an immediate stimulus to another person, such as age, gender, skin color, and physical appearance. These types of characteristics may influence initial interactions because of the expectations formed immediately. Resource characteristics are those that relate partly to mental and emotional resources such as past experiences, skills, and intelligence, and also to social and material resources (access to good food, housing, caring parents, and educational opportunities appropriate to the needs of the particular society). Finally, force characteristics are those that have to do with differences of temperament, motivation, and persistence. According to Bronfenbrenner, two children may have equal resource characteristics, but their developmental trajectories will be quite different if one is motivated to succeed and persists in tasks and the other is not motivated and does not persist. As such, Bronfenbrenner provided a clearer view of individuals' roles in changing their context. The change can range from relatively passive (a person changes the environment simply by being in it), to more active (the ways in which the person changes the environment are linked to his or her resource characteristics, whether physical, mental, or emotional), to most active (the extent to which the person changes the environment is linked, in part, to the desire and drive to do so, or force characteristics).

The context, or environment, involves four of the five interrelated systems of the original theory: the microsystem, the mesosystem, the exosystem, and the macrosystem.

The final element of the PPCT model is time. Time plays a crucial role in human development. In the same way that both context and individual factors are divided into sub-factors or sub-systems, Bronfenbrenner and Morris wrote about time as constituting micro-time (what is occurring during the course of some specific activity or interaction), meso-time (the extent to which activities and interactions occur with some consistency in the developing person's environment), and macro-time (the chronosystem). Time and timing are equally important because all aspects of the PPCT model can be thought of in terms of relative constancy and change.
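To summarize the nested structure of the framework in a compact form, the sketch below represents the systems and the PPCT components as a small Python data structure. It is only an organizational aid drawn from the description above; the example entries (a child's family, school, a parent's workplace, and so on) are illustrative assumptions rather than part of Bronfenbrenner's own text.

```python
# A minimal representation of Bronfenbrenner's nested systems and the PPCT components,
# using illustrative example entries (assumptions for the sake of the sketch).

ecological_systems = {
    "microsystem":  ["family", "school", "neighborhood", "childcare setting"],
    "mesosystem":   ["parent-teacher relationship", "links between church and neighborhood"],
    "exosystem":    ["parent's workplace schedule", "community-based family resources"],
    "macrosystem":  ["cultural attitudes", "social and economic conditions"],
    "chronosystem": ["timing of life events", "historical period"],
}

ppct_model = {
    "process": "proximal processes: regular, progressively more complex interactions",
    "person":  ["demand characteristics", "resource characteristics", "force characteristics"],
    "context": ["microsystem", "mesosystem", "exosystem", "macrosystem"],
    "time":    ["micro-time", "meso-time", "macro-time (chronosystem)"],
}

# Print the layers from the innermost outward, as the theory describes them.
for layer in ["microsystem", "mesosystem", "exosystem", "macrosystem", "chronosystem"]:
    print(f"{layer:>12}: {', '.join(ecological_systems[layer])}")
```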
Applications

The application of social ecological theories and models focuses on several goals: to explain the person-environment interaction, to improve people-environment transactions, to nurture human growth and development in particular environments, and to improve environments so that they support the expression of individuals' and systems' dispositions. Some examples are:
Political and economic policies that support the importance of parents' roles in their children's development, such as Head Start or Women, Infants and Children programs.
Fostering of societal attitudes that value work done on behalf of children at all levels: parents, teachers, extended family, mentors, work supervisors, legislators.
In community health promotion: identifying high-impact leverage points and intermediaries within organizations that can facilitate the successful implementation of health-promoting interventions, combining person-focused and environmentally based components within comprehensive health promotion programs, and measuring the scope and sustainability of intervention outcomes over prolonged periods.
As the basis of intervention programs to address issues such as bullying, obesity, overeating and physical activity. Interventions that use the social ecological model as a framework include mass media campaigns, social marketing, and skills development.
In economics: economic activity, human habits, and cultural characteristics are shaped by geography. In economics, output is a function of natural resources, human resources, capital resources, and technology. The environment (macrosystem) dictates much of the lifestyle of the individual and the economy of the country. For instance, if the region is mountainous or arid and there is little land for agriculture, the country typically will not prosper as much as another country that has greater resources.
In risk communication: used to assist the researcher in analyzing the timing of when information is received and in identifying the receivers and stakeholders. This situation is an environmental influence that may be very far-reaching. The individual's education level, understanding, and affluence may dictate what information he or she receives and processes and through which medium.
In personal health: to prevent illness, a person should avoid an environment in which they may be more susceptible to contracting a virus or where their immune system would be weakened. This also includes possibly removing oneself from a potentially dangerous environment or avoiding a sick coworker. On the other hand, some environments are particularly conducive to health benefits. Surrounding oneself with physically fit people will potentially act as a motivator to become more active, diet, or work out at the gym. The government banning trans fats may have a positive top-down effect on the health of all individuals in that state or country.
In human nutrition: used as a model for nutrition research and interventions. The social ecological model looks at multiple levels of influence on specific health behaviors.
Levels include intrapersonal (the individual's knowledge, demographics, attitudes, values, skills, behavior, self-concept, self-esteem), interpersonal (social networks, social supports, families, work groups, peers, friends, neighbors), organizational (norms, incentives, organizational culture, management styles, organizational structure, communication networks), community (community resources, neighborhood organizations, folk practices, non-profit organizations, informal and formal leadership practices), and the public policy level (legislation, policies, taxes, regulatory agencies, laws). Multi-level interventions are thought to be most effective in changing behavior.
In public health: drawing upon this model to address the health of a nation's population is viewed as critically important to the strategic alignment of policy and services across the continuum of population health needs, including the design of effective health promotion and disease prevention and control strategies. Thus also, in the development of universal health care systems, it is appropriate to recognize "Health in All Policies" as the overarching policy framework, with public health, primary health care and community services as the cross-cutting framework for all health and health-related services operating across the spectrum from primary prevention to long term care and end-stage conditions. Although this perspective is both logical and well grounded, the reality is different in most settings, and there is room for improvement everywhere.
In politics: the act of politics is making decisions. A decision may be required of an individual, organization, community, or country. A decision a congressman makes affects anyone in his or her jurisdiction. If one makes the decision not to vote for the President of the United States, one has given oneself no voice in the election. If many other individuals choose not to voice their opinion and/or vote, they have inadvertently allowed a majority of others to make the decision for them. On the international level, if the leadership of the U.S. decides to occupy a foreign country, it not only affects the leadership; it also affects U.S. service members, their families, and the communities they come from. There are multiple cross-level and interactive effects of such a decision.

Criticism

Although generally well received, Urie Bronfenbrenner's models have encountered some criticism throughout the years. Most criticism centers on the difficulty of empirically testing the theory and model and on the broadness of the theory, which makes it challenging to intervene at any given level. One main critique of Bronfenbrenner's bioecological model is that it "...focuses too much on the biological and cognitive aspects of human development, but not much on socioemotional aspect of human development". Some examples of critiques of the theory are:
Challenging to evaluate all components empirically.
A difficult explanatory model to apply, because it requires an extensive scope of ecological detail with which to build up meaning, so that everything in someone's environment needs to be taken into account.
Failure to acknowledge that children positively cross boundaries to develop complex identities.
Inability to recognize that children's own constructions of family are more complex than traditional theories account for.
The systems around children are not always linear.
Preoccupation with achieving "normal" childhood without a common understanding of "normal".
Fails to see that the variables of social life are in constant interplay and that small variables can change a system. Misses the tension between control and self-realization in child-adult relationships; children can shape culture. Underplays abilities, overlooks rights/feelings/complexity. Gives too little attention to biological and cognitive factors in children's development. Does not address developmental stages that are the focus of theories like Piaget's and Erikson's. Key contributors Urie Bronfenbrenner Ernest Burgess Garrett Hardin Amos H. Hawley Alastair McIntosh John G. Oetzel Robert E. Park Daniel Stokols See also Environmental sociology Ecology Sociology Psychology Social Psychology Bioecological model Ecosystem Ecosystem ecology Systems ecology Systems psychology Theoretical ecology Systems thinking References Further reading Bronfenbrenner, U. (1977). Toward an experimental ecology of human development. American Psychologist, 32, 513-531. Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Cambridge, MA: Harvard University Press. Bronfenbrenner, U. (1986). Ecology of the family as a context for human development: Research perspectives. Developmental Psychology, 22(6), 723-742. Bronfenbrenner, U. (1988). Interacting systems in human development. Research paradigms: Present and future. In N. Bolger, A. Caspi, G. Downey, & M. Moorehouse (Eds.), Persons in context: Developmental processes (pp. 25–49). Cambridge: Cambridge University Press. Bronfenbrenner, U. (1989). Ecological systems theory. In R. Vasta (Ed.), Annals of child development, Vol. 6 (pp. 187–249). Greenwich, CT: JAI Press. Bronfenbrenner, U. (1993). The ecology of cognitive development: Research models and fugitive findings. In R. Wonziak & K. Fischer (Eds.), Development in context: Acting and thinking in specific environments (pp. 3–44). Hillsdale, NJ: Erlbaum. Bronfenbrenner, U. (1994). Ecological models of human development. In T. Husen & T. N. Postlethwaite (Eds.), International Encyclopedia of Education (2nd Ed., Vol. 3, pp. 1643– 1647). Oxford, England: Pergamon Press. Bronfenbrenner, U. (1995). Developmental ecology through space and time: A future perspective. In P. Moen, G. H. Elder, Jr., K. Lüscher (Eds.), Examining lives in context: Perspectives on the ecology of human development (pp. 619–647). Washington, DC: American Psychological Association. Bronfenbrenner, U. (1999). Environments in developmental perspective: Theoretical and operational models. In S. L. Friedman & T. D. Wachs (Eds.), Measuring environment across the life span: Emerging methods and concepts (pp. 3–28). Washington, DC: American Psychological Association Press. Bronfenbrenner, U. (2005). Making human beings human: Bioecological perspectives on human development. Thousand Oaks, CA: Sage Publications. Bronfenbrenner, U. & Ceci, S. J. (1994). Nature-nurture reconceptualized in developmental perspective: A biological model. Psychological Review, 101, 568-586. Bronfenbrenner, U., & Crouter, A. C. (1983). The evolution of environmental models in developmental research. In P. H. Mussen (Series Ed.) & W. Kessen (Vol. Ed.), Handbook of child psychology, Vol. 1: History, theory, methods (4th ed., pp. 357–414). New York: Wiley. Bronfenbrenner, U., & Evans, G. W. (2000). Developmental science in the 21st century: Emerging questions, theoretical models, research designs and empirical findings. Social Development, 9(1), 115-125. Bronfenbrenner, U. & Morris, P. A. (1998). 
The ecology of developmental processes. In W. Damon & R. M. Lerner (Eds.), Handbook of child psychology, Vol. 1: Theoretical models of human development (5th ed., pp. 993–1023). New York: John Wiley and Sons, Inc. Bronfenbrenner, U., & Morris, P. A. (2006). The bioecological model of human development. In W. Damon & R. M. Lerner (Eds.), Handbook of child psychology, Vol. 1: Theoretical models of human development (6th ed., pp. 793–828). New York: John Wiley. Paquette, D., & Ryan, J. (2001). Bronfenbrenner's ecological systems theory. Woodside, A. G., Caldwell, M., & Spurr, R. (2006). Advancing ecological systems theory in lifestyle, leisure, and travel research. Journal of Travel Research, 44(3), 259–272. Kail, R. V., & Cavanaugh, J. C. (2010). The study of human development. Human development: A life-span view (5th ed.). Belmont, CA: Wadsworth Cengage Learning. Gregson, J. (2001). System, environmental, and policy changes: Using the social-ecological model as a framework for evaluating nutrition education and social marketing programs with low-income audiences. Journal of Nutrition Education, 33(1), 4-15. Guerrero, L. K., & La Valley, A. G. (2006). Conflict, emotion, and communication. In J. G. Oetzel & S. Ting-Toomey (Eds.), The SAGE handbook of conflict communication. Thousand Oaks, CA: Sage, 69-96. Hawley, A. H. (1950). Human ecology: A theory of community structure. New York: Ronald Press. Lewin, K. (1935). A dynamic theory of personality. New York: McGraw-Hill. McLeroy, K. R., Bibeau, D., Steckler, A., & Glanz, K. (1988). An ecological perspective on health promotion programs. Health Education Quarterly, 15, 351-377. Oetzel, J. G., Ting-Toomey, S., & Rinderle, S. (2006). Conflict communication in contexts: A social ecological perspective. In J. G. Oetzel & S. Ting-Toomey (Eds.), The SAGE handbook of conflict communication. Thousand Oaks, CA: Sage. Stokols, D. (1996). Translating social ecological theory into guidelines for community health promotion. American Journal of Health Promotion, 10, 282-298. Social ecology
Adhocracy
Adhocracy is a flexible, adaptable, and informal form of organization that is defined by a lack of formal structure and that employs specialized multidisciplinary teams grouped by function. It operates in a fashion opposite to bureaucracy. Warren Bennis coined the term in his 1968 book The Temporary Society. Alvin Toffler popularized the term in 1970 with his book Future Shock, and it has since become widely used in the management theory of organizations (particularly online organizations). The concept has been further developed by academics such as Henry Mintzberg. Adhocracy is a system of adaptive, creative, and flexible integrative behavior based on non-permanence and spontaneity. These characteristics are believed to allow adhocracy to respond faster than traditional bureaucratic organizations while being more open to new ideas. Overview Robert H. Waterman, Jr. defines adhocracy as "any form of organization that cuts across normal bureaucratic lines to capture opportunities, solve problems, and get results". For Henry Mintzberg, an adhocracy is a complex and dynamic organizational form. It is different from bureaucracy; like Toffler, Mintzberg considers bureaucracy a thing of the past, and adhocracy one of the future. When done well, adhocracy can be very good at problem solving and innovation and can thrive in diverse environments. It requires sophisticated and often automated technical systems to develop and thrive. Academics have described Wikipedia as an adhocracy. Characteristics Some characteristics of Mintzberg's definition include: a highly organic structure; little formalization of behavior; job specialization not necessarily based on formal training; a tendency to group the specialists in functional units for housekeeping purposes but to deploy them in small, market-based project teams to do their work; a reliance on liaison devices to encourage mutual adjustment within and between these teams; low or no standardization of procedures; roles that are not clearly defined; selective decentralization; work organization resting on specialized teams; power shifts to specialized teams; horizontal job specialization; high cost of communication; and a culture based on non-bureaucratic work. All members of an organization have the authority, within their areas of specialization and in coordination with other members, to make decisions and to take actions affecting the future of the organization. There is an absence of hierarchy. According to Robert H. Waterman, Jr., "Teams should be big enough to represent all parts of the bureaucracy that will be affected by their work, yet small enough to get the job done efficiently." Types administrative – "feature an autonomous operating core; usually in an institutionalized bureaucracy like a government department or standing agency" operational – solves problems on behalf of its clients Alvin Toffler claimed in his book Future Shock that adhocracies will become more common and are likely to replace bureaucracy. He also wrote that they will most often come in the form of a temporary structure, formed to resolve a given problem and dissolved afterwards. Examples are cross-department task forces. Issues Downsides of adhocracies can include "half-baked actions", personnel problems stemming from the organization's temporary nature, extremism in suggested or undertaken actions, and threats to democracy and legality arising from adhocracy's often low-key profile. To address those problems, researchers in adhocracy suggest a model merging adhocracy and bureaucracy, the bureau-adhocracy.
Etymology The word is a portmanteau of the Latin ad hoc, meaning "for the purpose", and the suffix -cracy, from the ancient Greek kratein (κρατεῖν), meaning "to govern", and is thus a heteroclite. Use in fiction The term is also used to describe the form of government used in the science fiction novels Voyage from Yesteryear by James P. Hogan and Down and Out in the Magic Kingdom by Cory Doctorow. In the radio play Das Unternehmen Der Wega (The Mission of the Vega) by Friedrich Dürrenmatt, the human inhabitants of Venus, all banished there from various regions of Earth for civil and political offenses, form and live under a peaceful adhocracy, to the frustration of delegates from an Earth faction who hope to gain their cooperation in a war brewing on Earth. In the Metrozone series of novels by Simon Morden, the novel The Curve of the Earth features "ad-hoc" meetings conducted virtually, by which all decisions governing the Freezone collective are taken. The ad-hocs are administered by an artificial intelligence, and their members are polled from suitably qualified individuals who are judged by the AI to have sufficient experience. Failure to arrive at a decision results in the polling of a new ad-hoc, whose members are not told of previous ad-hocs before hearing the decision which must be made. The asura in the fictional world of Tyria within the Guild Wars universe present this form of government, although the term is only used in out-of-game lore writings. See also Anarchy Affinity group Bureaucracy (considered the opposite of adhocracy) Crowdsourcing Commons-based peer production Free association Here Comes Everybody Holacracy Libertarianism Self-management Social peer-to-peer processes Socialism Sociocracy Spontaneous order The Tyranny of Structurelessness Union of egoists Workplace democracy References Sources Adhocracy by Robert H. Waterman, Jr. Future Shock by Alvin Toffler Forms of government Organization design Libertarian theory 1970 introductions Types of organization
Structural inequality
Structural inequality occurs when the fabric of organizations, institutions, governments or social networks contains an embedded cultural, linguistic, economic, religious/belief, physical or identity-based bias which provides advantages for some members and marginalizes or produces disadvantages for other members. This can involve personal agency, freedom of expression, property rights, freedom of association, religious freedom, social status, or unequal access to health care, housing, education, physical, cultural, social, religious or political belief, financial resources, or other social opportunities. Structural inequality is believed to be an embedded part of all known cultural groups. The global history of slavery, serfdom, indentured servitude and other forms of coerced cultural or government-mandated labour or economic exploitation that marginalizes individuals, and the subsequent suppression of human rights (see UDHR), are key factors defining structural inequality. Structural inequality can be encouraged and maintained in society through structured institutions such as state governments and other cultural institutions like government-run school systems, with the goal of maintaining the existing governance/tax structure regardless of the wealth, employment opportunities, and social standing of different identity groups, by keeping minority students from high academic achievement in high school and college as well as in the workforce of the country. In the attempt to equalize the allocation of state funding, policymakers evaluate the elements of disparity to determine an equalization of funding throughout school districts. Formal equality of opportunity disregards collective dimensions of inequality, which are addressed by substantive equality with equality of outcomes for each group. Combating structural inequality therefore often requires broad, policy-based structural change on the part of government organizations, and is often a critical component of poverty reduction. In many ways, a well-organized democratic government that can effectively combine moderate growth with redistributive policies stands the best chance of combating structural inequality. Education Education is the base for equality. Specifically, in the structuring of schools, the concept of tracking is believed by some scholars to create a social disparity in providing students an equal education. Schools have been found to have a unique acculturative process that helps to pattern self-perceptions and world views. Schools not only provide education but also a setting for students to develop into adults, form future social status and roles, and maintain the social and organizational structures of society. Tracking is an educational term that indicates where students will be placed during their secondary school years. "Depending on how early students are separated into these tracks, determines the difficulty in changing from one track to another" (Grob, 2003, p. 202). Tracking or sorting categorizes students into different groups based on standardized test scores. These groups or tracks are vocational, general, and academic. Students are sorted into groups that will determine educational and vocational outcomes for the future. The sorting that occurs in the educational system parallels the hierarchical social and economic structures in society. Thus, students are viewed and treated differently according to their individual track.
Each track has a designed curriculum that is meant to fit the unique educational and social needs of each sorted group. Consequently, the information taught as well as the expectations of the teachers differ based on the track, resulting in the creation of dissimilar classroom cultures. Spatial/regional Globally, the issue of spatial inequality is largely a result of disparities between urban and rural areas. A study commissioned by the United Nations University WIDER project has shown that for the twenty-six countries included in the study, spatial inequalities have been high and on the increase, especially for developing nations. Many of these inequalities were traced back to "second nature" geographic forces that describe the infrastructure a society has in place for facilitating the trade of goods and employment between economic agents. Another dominant and related factor is the ease of access to bodies of water and to forms of long-distance trade such as ports. The discrepancies between the growth of communities close to these bodies of water and those further away have been noted both between and within countries. In the United States and many other developed countries, spatial inequality has developed into more specific forms described by residential segregation and housing discrimination. This has especially come into focus as education and employment are often tied to where a household is located relative to urban centers, and a variety of metrics, from education levels to welfare benefits, have been correlated with spatial data. Consequences Specifically, studies have identified a number of economic consequences of housing segregation. Perhaps the most obvious is the isolation of minorities, which creates a deficit in the potential for developing human capital. Second, many of the public schools that areas of low socioeconomic status have access to are underperforming, in part due to the limited budget the district receives from the limited tax base in the same area. Finally, another large factor is simply the wealth and security homeownership represents. Property values rarely increase in areas where poverty is high in the first place. Causes The causes of spatial inequality, however, are more complex. The mid-20th-century phenomenon of the large-scale migration of white middle-class families from urban centers gave rise to the term white flight. While the current state of housing discrimination can be partly attributed to this phenomenon, a larger set of institutionalized practices, such as bias in the lending and real estate industries and discriminatory government policies, has helped to perpetuate the division created since then. These include bias found in the banking and real estate industries as well as discriminatory public policies that promote racial segregation. In addition, rising income inequality between blacks and whites since the 1970s has created affluent neighborhoods that tend to be composed of families of a homogeneous racial background within the same income bracket. A similar pattern along racial lines helps to explain how more than 32% of blacks now live in suburbs. However, these new suburbs are often divided along racial lines, and a 1992 survey showed that 82% of blacks preferred to live in a suburb where their race is in the majority. This is further aggravated by practices like racial steering, in which realtors guide home buyers towards neighborhoods based on race.
Transportation Government policies that have tended to promote spatial inequalities include actions by the Federal Housing Administration (FHA) in the United States in promoting redlining, a practice in which mortgages could be selectively administered while excluding certain urban neighborhoods deemed risky, oftentimes because of race. Practices like this continued to prevent home buyers from getting mortgages in redlined areas until the 1960s, when the FHA discontinued the determination of restrictions based on racial composition. The advent of freeways also added a complex layer of incentives and barriers which helped to increase spatial inequalities. First, these new networks allowed middle-class families to move out to the suburbs while retaining connections, such as employment, to the urban center. Second, and perhaps more importantly, freeways were routed through minority neighborhoods, oftentimes creating barriers between these neighborhoods and central business districts and middle-class areas. Highway plans often avoided a more direct route through upper- or middle-class neighborhoods because minorities did not have sufficient power to prevent such routing through their own communities. Solutions Douglas Steven Massey identifies three goals specifically for the United States to end residential segregation: reorganize the structure of metropolitan government, make greater investment in education, and open the housing market to full participation. More specifically, he advocates broader, metropolitan-wide units of taxation and governance where the tax base and decisions are shared equally by both the urban and suburban population. Education is the key to closing employment inequalities in a post-manufacturing era. And finally, the federal government must take large strides towards enforcing the anti-segregation measures related to housing it has already put into place, like the Fair Housing Act, the Home Mortgage Disclosure Act, and the Community Reinvestment Act. Another set of divisions that may be useful in framing policy solutions includes three categories: place-based policies, people-based policies, and indirect approaches. Place-based policies include improving community facilities and services like schools and public safety in inner-city areas in an effort to appeal to middle-class families. These programs must be balanced with concerns about gentrification. People-based policies help increase access to credit for low-income families looking to move, and this sort of policy has been typified by the Community Reinvestment Act and its many revisions throughout its legislative history. Finally, indirect approaches often involve providing better transportation options to low-income areas, like public transit routes or subsidized car ownership. These approaches target the consequences rather than the causes of segregation, and rely on the assumption that one of the most harmful effects of spatial inequality is the lack of access to employment opportunities. In conclusion, a common feature of all of these is investment in the capital and infrastructure of inner-city neighborhoods. Healthcare The quality of healthcare that a patient receives strongly depends upon its accessibility. Kelley et al. define access to healthcare as "the timely use of personal health services to achieve the best health outcomes".
Health disparities, which are largely caused by unequal access to healthcare, can be defined as "a difference in which disadvantaged social groups such as the poor, racial/ethnic minorities, women and other groups who have persistently experienced social disadvantage or discrimination systematically experience worse health or greater health risks than most advantaged social groups." Manifestations of inequality in healthcare appear throughout the world and are a topic of urgency in the United States. In fact, studies have shown that income-related inequality in healthcare expenditures favors the wealthy to a greater degree in the United States than in most other Western nations. The enormous costs of healthcare, coupled with the vast number of Americans lacking health insurance, indicate the severe inequality and serious problems that exist. The healthcare system in the United States perpetuates inequality by "rationing health care according to a person's ability to pay, by providing inadequate and inferior health care to poor people and persons of color, and by failing to establish structures that can meet the health needs of Americans". Racial Racial disparity in access and quality of healthcare is a serious problem in the United States and is reflected by evidence such as the fact that African American life expectancies lag behind those of whites by over 5 years, and African Americans tend to experience more chronic conditions. African Americans have a 30% higher death rate from cardiovascular disease and experience 50% more diabetic complications than their white counterparts. The Agency for Healthcare Research and Quality (AHRQ), directed by Congress, led an effort for the development of two annual reports by the Department of Health and Human Services (DHHS), the National Healthcare Quality Report and the National Healthcare Disparities Report, which tracked disparities in healthcare in relation to racial and socioeconomic factors. These reports developed about 140 measures of quality of care and about 100 measures of access to care, which were used to measure the healthcare disparities. The first reports, released in December 2003, found that blacks and Hispanics experienced poorer healthcare quality for about half of the quality measures reported in the NHQR and NHDR. Also, Hispanics and Asians experienced poorer access to care for about two thirds of the healthcare access measures. Recent studies on Medicare patients show that black patients receive poorer medical care than their white counterparts. Compared with white patients, blacks receive far fewer operations, tests, medications and other treatments, suffering greater illnesses and more deaths as a result. Measures done by the Agency for Healthcare Research and Quality (AHRQ) show that "fewer than 20% of disparities faced by Blacks, AI/ANs and Hispanics showed evidence of narrowing." One specific study showed that African Americans are less likely than whites to receive referrals for cardiac catheterization and bypass grafting, prescriptions of analgesia for pain control, and surgical treatment of lung cancer. Both African Americans and Latinos also receive less pain medication than whites for long bone fractures and cancer. Other studies showed that African Americans are reported to receive fewer pediatric prescriptions, poorer quality of hospital care, fewer hospital admissions for chest pain, lower quality of prenatal care, and less appropriate management of congestive heart failure and pneumonia.
Language barriers became a large factor in the process of seeking healthcare due to the rise in minority populations across the United States. In 2007, a Census Bureau estimate stated that 33.6% of the United States population belonged to racial or ethnic groups other than non-Hispanic whites. Of people within the United States during this time, 20% spoke a language other than English at home. Having a language barrier can create many hurdles in healthcare: difficulty communicating with health professionals, difficulty sourcing and funding language assistance, having little to no access to translators, and so on. A projection for 2050 showed that over 50% of the United States population would belong to a racial category other than non-Hispanic white, demonstrating the rapid increase of minority populations within the United States over time and the growing importance of language access. Gender In addition to race, healthcare inequality also manifests across gender lines. Though women tend to live longer than men, they tend to report poorer health status, more disabilities as they age, and tend to be higher utilizers of the healthcare system. Healthcare disparities often put women at a disadvantage. Time for seeking care must be scheduled around work (whether formal or informal), child care needs, and geography, which increases the travel time necessary for those who do not live near healthcare facilities. Furthermore, "poor women and their children tend to have inadequate housing, poor nutrition, poor sanitation, and high rates of physical, emotional, and sexual abuse." Since women and children constitute 80% of the poor in the United States, they are particularly susceptible to experiencing the negative impact of healthcare inequality. Spatial Spatial inequalities in distribution and geographic location also affect access to and quality of healthcare. A study done by Rowland, Lyons, and Edwards (1988) found that rural patients were more likely to be poor and uninsured. Because of the fewer healthcare resources available in rural areas, these patients received fewer medical services than urban patients. Other studies showed that African Americans and Hispanics are more likely than whites to live in areas that are underserved by healthcare providers, forcing them to wait longer for care in crowded and/or understaffed facilities or to travel longer distances to receive care in other areas. This travel time often poses an obstacle to receiving medical care and often leads patients to delay seeking care. In fact, African Americans and Hispanics are more likely than whites to delay seeking medical care until their condition becomes serious, rather than seeking regular medical care, because travel and wait times are both costly and an interference with other daily activities. An individual's environment greatly impacts his or her health status. For example, three of the five largest landfills in the United States are situated in communities which are predominantly African American and Latino, contributing to some of the highest pediatric asthma rates in those groups. Impoverished individuals who find themselves unable to leave their neighborhoods consequently are continuously exposed to the same harmful environment, which negatively impacts health. Economic Socioeconomic background is another source of inequality in healthcare. Poverty significantly influences the production of disease, since poverty increases the likelihood of having poor health in addition to decreasing the ability to afford preventive and routine healthcare.
Lack of access to healthcare has a significant negative impact on patients, especially those who are uninsured, since they are less likely to have a regular source of care, such as a primary care physician, and are more likely to delay seeking care until their condition becomes life-threatening. Studies show that people with health insurance receive significantly more care than those who are uninsured, the most vulnerable groups being minorities, young adults, and low-income individuals. The same trend for uninsured versus insured patients holds true for children as well. Hadley, Steinberg, and Feder (1991) found that hospitalized patients who are not covered under health insurance are less likely to receive high-cost, specialized procedures and, as a result, are more likely to die while hospitalized. Feder, Hadley, and Mullner (1984) noticed that hospitals often ration free care by denying care to those who are unable to pay and cutting services commonly used by the uninsured poor. Minorities are less likely to have health insurance because they are less likely to occupy middle to upper income brackets, and therefore are incapable of purchasing health insurance, and also because they tend to hold low-paying jobs that do not provide health insurance as part of their job-related benefits. Census data show that 78.7% of whites are covered by private insurance compared with 54% of blacks and 51% of Hispanics. About 29% of Hispanics in the United States have neither private nor government health insurance of any kind. A study done on Medicare recipients also showed that despite the uniform benefits offered, high-income elderly patients received 60% more physician services and 45% more days of hospital care than lower-income elderly patients not covered by Medicaid. After adjustment for health status, people with higher incomes are shown to have higher expenditures, indicating that the wealthy are strongly favored in income-related inequality in medical care. However, this inequality differs across age groups. Inequality was shown to be greatest for senior citizens, then adults, and least for children. This pattern showed that financial resources and other associated attributes, such as educational attainment, were very influential in access to and utilization of medical care. Solutions The acknowledgement that access to health services differed depending on race, geographic location, and socioeconomic background was an impetus in establishing health policies to benefit these vulnerable groups. In 1965, specific programs, such as Medicare and Medicaid, were implemented in the United States in an attempt to extend health insurance to a greater portion of the population. Medicare is a federally funded program that provides health insurance for people aged 65 or older, people younger than 65 with certain disabilities, and people of any age who have End-Stage Renal Disease (ESRD). Medicaid, on the other hand, provides health coverage to certain low-income people and families and is largely state-governed. However, studies have shown that for-profit hospitals tend to make healthcare less accessible to uninsured patients, in addition to those under Medicaid, in an effort to contain costs. Another program, the State Children's Health Insurance Program (SCHIP), provides low-cost health insurance to children in families who do not qualify for Medicaid but cannot afford private health insurance on their own. The necessity of achieving equity in quality of and access to healthcare is glaring and urgent.
According to Fein (1972), this goal could include equal health outcomes for all by income group, equal expenditures per capita across income groups, or eliminating income as a healthcare rationing device. Some have proposed that a national health insurance plan with comprehensive benefits and no deductibles or other costs for patients would provide the most equity. Fein also stressed that healthcare reform was needed, specifically in eliminating payment for treating patients that depended on patient income or the quantity of services given. He proposed instead paying physicians on a salaried basis. Another study, by Reynolds (1976), found that community health centers improved access to health care for many vulnerable groups, including youth, blacks, and people with serious diseases. The study indicated that community health centers provided more preventive care and greater continuity of care, though there were problems in obtaining adequate funding as well as adequate staffing. Engaging the community to understand the link between social issues such as employment, education, and poverty can help motivate community members to advocate for policies that improve health status. Increasing the racial and ethnic diversity of healthcare providers can also serve as a potential solution. Racial and ethnic minority healthcare providers are much more likely than their white counterparts to serve minority communities, which can have many positive effects. Advocating for an increase in minority healthcare providers can help improve the quality of patient-physician communication as well as reduce the crowding in understaffed facilities in areas in which minorities reside. This can help decrease wait times as well as increase the likelihood that such patients will seek out nearby healthcare facilities rather than traveling farther distances as a last resort. Implementing efforts to increase translation services can also improve quality of healthcare. This means increased availability of bilingual and bicultural healthcare providers for non-English speakers. Studies show that non-English-speaking patients self-reported better physical functioning, psychological well-being, health perceptions, and lower pain when receiving treatment from a physician who spoke their language. Hispanic patients specifically reported increased compliance with treatment plans when their physician spoke Spanish and also shared a similar background. Training programs to improve and broaden physicians' communication skills can increase patient satisfaction, patient compliance, patient participation in treatment decisions, and utilization of preventive care services. The idea of universal health care, which is implemented in many other countries, has been a subject of heated debate in the United States. Employment Employment is a key source of income for a majority of the world's population, and therefore is the most direct method through which people can escape poverty. However, unequal access to decent work and persistent labor market inequalities frustrate efforts to reduce poverty. Studies have further divided employment segregation into two categories: first generation and second generation discrimination. First generation discrimination occurs as an overt bias displayed by employers, and has been on the decline since the end of the civil rights era. Second generation discrimination, on the other hand, is less direct and therefore much harder to legislate against.
This helps explain the disparity between female hiring rates and male/female ratios, which have improved recently, and the continuing relative scarcity of women in upper-level management positions. Therefore, while extensive legislation has been passed regarding employment discrimination, informal barriers still exist in the workplace. For instance, gender discrimination often takes the form of working hours and childcare-related benefits. In many cases, female professionals who must take maternity leave or single mothers who must care for their children are at a disadvantage when it comes to promotions and advancement. Education level Employment discrimination is also closely linked to education and skills. One of the most important factors that help explain employment disparities is that for much of the post-WWII era, many Western countries began shedding the manufacturing jobs that had provided relatively high wages to people with moderate to low job skills. Starting from the 1960s, the United States began a shift away from low-wage jobs, especially in the manufacturing sector, towards technology-based or service-based employment. This had the unbalanced effect of decreasing employment opportunities for the least educated in the labor force while at the same time increasing the productivity, and therefore wages, of the skilled labor force, increasing the level of inequality. In addition, globalization has tended to compound this decrease in demand for domestic unskilled labor. Finally, weak labor market policies since the 1970s and 1980s have failed to address the income inequalities faced by those employed at lower income levels. Namely, the union movement began to shrink, decreasing the power of employees to negotiate employment terms, and the minimum wage was prevented from increasing alongside inflation. Racial Other barriers arise in occupations that require an extensive network for developing clientele, such as law, medicine, and sales. Studies have shown that for blacks and whites in the same occupation, whites can often benefit from a wealthier pool of clients and connections. In addition, studies show that only a small percentage of low-skilled employees are hired through advertisements or cold calls, highlighting the importance of social connections with middle- and upper-class employers. Furthermore, racially disparate employment consequences can arise from racial patterns in other social processes and institutions, such as criminal justice contact (often with spillover effects on local communities of color). At the county level, for example, jail incarceration has been found to significantly diminish local labor markets in areas with relatively high proportions of Black residents. Gender Though women have become an increasing presence in the workforce, there currently exists a gender gap in earnings. Statistics show that women who work full-time year-round earn 75% of the income of their male counterparts. Part of the gender gap in employment earnings is due to women concentrating in different occupational fields than men, which is known as occupational segregation. The 1990 Census data show that more than 50% of women would have to change jobs before women would be distributed in the same way as men within the job market, achieving complete gender integration. This can be attributed to the tendency of women to choose degrees that funnel into jobs that are less lucrative than those chosen by men.
Other studies have shown that the Hay system, which evaluates jobs, undervalues the occupations that tend to be filled by women, which continues to bias wages against women's work. Once a certain job becomes associated with women, its social value decreases. Almost all studies show that the percentage of women is correlated with lower earnings for both males and females even in fields that required significant job skills, which suggests a strong effect of gender composition on earnings. Additionally, women tend to be hired into less desirable jobs than men and are denied access to more skilled jobs or jobs that place them in an authoritative role. In general, women tend to hold fewer positions of power when compared to men. A study done by Reskin and Ross (1982) showed that when tenure and productivity-related measurements were controlled, women had less authority and earned less than men of equal standing in their occupation. Exclusionary practices provide the most valuable job openings and career opportunities for members of groups of higher status which, in the United States, mostly means Caucasian males. Therefore, males are afforded more advantages than females and perpetuate this cycle while they still hold more social power, allocating lower-skilled and lower-paying jobs to females and minorities. Inequality in investment of skills Another factor of the gender earnings gap is reflected in the difference in job skills between women and men. Studies suggest that women invest less in their own occupational training because they stay in the workforce for a shorter period of time than men (because of marriage or rearing children) and therefore have a shorter time span to benefit from their extra efforts. However, there is also discrimination by the employer. Studies have shown that the earnings gap is also due to employers investing less money in training female employees, which leads to a gender disparity in accessing career development opportunities. Prescribed gender roles Women tend to stay in the workforce for less time than men due to marriage or the time devoted to raising children. Consequently, men are typically viewed as the “breadwinners” of the family, which is reflected in the employee benefits provided in careers that are traditionally occupied by males. A study done by Heidi M. Berggren, assessing the employee benefits provided to nurses (a traditional female career) and automobile mechanics and repairmen (a traditional male career), found that the latter provided more significant benefits such as health insurance and other medical emergency benefits whereas the former provided more access to sick leave with full pay. This outlines the roles allotted to women as the caregivers and the men as the providers of the family which subsequently encourages men to seek gainful employment while encouraging women to have a larger role at home than in the workplace. Many parental leave policies in the US are poorly developed and reinforce the roles of men as the breadwinner and women as the caregiver. Glass ceiling Women have often described subtle gender barriers in career advancement, known as the glass ceiling. This refers to the limited mobility of women in the workforce due to social restrictions that limit their opportunities and affect their career decisions. 
Solutions A study done by Doorne-Huiskes, den Dulk, and Schippers (1999) showed that in countries with government policy addressing the balance between work and family life, women have high participation in the work force and there is a smaller gender wage gap, indicating that such policy could encourage mothers to stay in their occupations while also encouraging men to take on a greater child-rearing role. Such measures include requiring employers to provide paid parental leave for employees so that both parents can care for children without risk to their careers. Another suggested measure is government-provided day care for children aged 0–6 or financial support for employees to pay for their own child care. In 1978, the Pregnancy Discrimination Act was passed and amended Title VII of the Civil Rights Act of 1964. This act designated discrimination based on pregnancy, childbirth, or associated medical issues as illegal gender discrimination. The Family and Medical Leave Act, passed in 1993, required employers to give up to twelve weeks of unpaid leave for the birth or adoption of a child and for the care of immediate family members who are ill. These two acts helped publicize the important role women play in caring for family members and gave women more opportunities to retain jobs that they would have previously lost. However, the Family and Medical Leave Act of 1993 is limited in that only 60% of all employees in the U.S. are eligible for this leave, since many small businesses are exempt from such coverage. The fact that parental leave measures continue to reinforce the traditional division of labor between the genders indicates a need to reduce the stigma of male parenting as well as the stigma of parenthood on female employment opportunities. Some possible developments to improve parental leave include: offering job protection, full benefits, and substantial pay as part of parental leave to heighten the social value of both parents caring for children; making parental leave more flexible so that both parents can take time off; reducing the negative impact of parental leave on job standing; and encouraging fathers to care for children by providing educational programs regarding pre-natal and post-natal care. References Sociological terminology Social inequality
Deschooling Society
Deschooling Society is a 1971 book written by Austrian priest Ivan Illich that critiques the role and practice of education in the modern world. Summary Deschooling Society begins as a polemical work that then proposes suggestions for changes to education in society and learning in individual lifetimes. For example, Illich calls for the use of advanced technology to support "learning webs", which incorporate "peer-matching networks", where descriptions of a person's activities and skills are mutually exchanged for the education that they would benefit from. Illich argued that, with an egalitarian use of technology and a recognition of what technological progress allows, it would be warranted to create decentralized webs that would support the goal of a truly equal educational system. Illich proposes a system of self-directed education in fluid and informal arrangements, which he describes as "educational webs which heighten the opportunity for each one to transform each moment of his living into one of learning, sharing, and caring." He further argues that education's institutionalisation fosters society's institutionalisation, and so de-institutionalising education may help de-institutionalise society. Further, Illich suggests reinventing learning and expanding it throughout society and across persons' lifespans. Once again, most influential was his 1971 call for advanced technology to support "learning webs". According to a review in the Libertarian Forum, "Illich's advocacy of the free market in education is the bone in the throat that is choking the public educators." Yet, unlike libertarians, Illich opposes not merely publicly funded schooling, but schools as such. Thus, Illich's envisioned disestablishment of schools aimed not to establish a free market in educational services, but to attain a fundamental shift: a deschooled society. In his 1973 book After Deschooling, What?, he asserted, "We can disestablish schools, or we can deschool culture." In fact, he called advocates of free-market education "the most dangerous category of educational reformers." Learning Networks Developing this idea, Illich proposes four Learning Networks: Reference Service to Educational Objects - An open directory of educational resources and their availability to learners. Skills Exchange - A database of people willing to list their skills and the basis on which they would be prepared to share or swap them with others. Peer-Matching - A network helping people to communicate their learning activities and aims in order to find similar learners who may wish to collaborate. Directory of Professional Educators - A list of professionals, paraprofessionals and freelancers detailing their qualifications, services and the terms on which these are made available. See also References External links MP3 version of the book, read for the Unwelcome Guests radio show Ivan Illichs "Deschooling society" verstehen oder mißverstehen? (Understanding or misunderstanding Ivan Illich's "Deschooling Society"), June 1, 2016, Bertrand Stern 1971 non-fiction books Books about education Alternative education Education reform Books in philosophy of technology Pedagogical publications
Third Space Theory
The Third Space is a postcolonial sociolinguistic theory of identity and community realized through language. It is attributed to Homi K. Bhabha. Third Space Theory explains the uniqueness of each person, actor or context as a "hybrid". See Edward W. Soja for a conceptualization of the term within the social sciences and from a critical urban theory perspective. Origins Third Space theory emerges from the sociocultural tradition in psychology identified with Lev Vygotsky. Sociocultural approaches are concerned with the "... constitutive role of culture in mind, i.e., on how mind develops by incorporating the community's shared artifacts accumulated over generations". Bhabha applies socioculturalism directly to the postcolonial condition, where there are, "... unequal and uneven forces of cultural representation". Wider use In the discourse of dissent, the Third Space has come to have two interpretations: the space where the oppressed plot their liberation (the whispering corners of the tavern or the bazaar), and the space where oppressed and oppressor are able to come together, free (maybe only momentarily) of oppression itself, embodied in their particularity. In educational studies, Maniotes examined literary Third Space in a classroom where students' cultural capital merged with the content of the curriculum as students backed up their arguments in literature discussions. Skerrett associates it with a multiliteracies approach. Pre-school: Third Space Theory has been applied to the pre-school space within which children learn to read, bringing domestic and school literacy practices into their own constructions of literacy. Another contemporary construction of three "spaces" is that one space is the domestic sphere: the family and the home; a second space is the sphere of civic engagement including school, work and other forms of public participation; and set against these is a Third Space where individual, sometimes professional, and sometimes transgressive acts are played out: where people let their "real" selves show. Sporting associations may be labeled as Third Space. Often bars and nightclubs are so labeled (Law 2000, 46–47). Latterly the term Third Space has been appropriated into brand marketing where domestic spaces and workforce-engagement spaces are set against recreational retail space: shopping malls as third spaces (see Third place, Postrel 2006; and see also Davis 2008). Bill Thompson (2007) offers an opposite conceptualisation of Third Space as public, civic space in the built environment under pressure from shopping malls and corporate enterprises, transforming public space into an extension of the market. Higher education: The Third Space is used by Whitchurch to describe a subset of staff in Higher Education who work in roles that cross the boundaries of professional/administrative and academic spheres, providing expert advice relating to learning and teaching without being practitioners. These include Learning/Instructional Designers and Education Technologists, among others. Explanatory and predictive use Third Space Theory can explain some of the complexity of poverty, social exclusion and social inclusion, and might help predict what sort of initiatives would more effectively ameliorate poverty and exclusion. Bonds of affinity (class, kin, location: e.g. neighbourhood, etc.) can function as "poverty traps". Third Space Theory suggests that every person is a hybrid of their unique set of affinities (identity factors).
Conditions and locations of social and cultural exclusion have their reflection in symbolic conditions and locations of cultural exchange. It appears to be accepted in policy that neither social capital nor cultural capital, alone or together, is sufficient to overcome social exclusion. Third Space Theory suggests that policies of remediation based in models of the Other are likely to be inadequate. See also Third place Hybridity World-systems theory Post-colonial theory Border References Postcolonialism Sociolinguistics
Geniocracy
Geniocracy is the framework for a system of government which was first proposed by Raël (leader of the International Raëlian Movement) in 1977 and which advocates a certain minimal criterion of intelligence for political candidates and also the electorate. Definition The term geniocracy comes from the word genius, and describes a system that is designed to select for intelligence and compassion as the primary factors for governance. While having a democratic electoral apparatus, it differs from traditional liberal democracy by suggesting that candidates for office and the electorate should meet a certain minimal criterion of problem-solving or creative intelligence. The thresholds proposed by the Raëlians are 50% above the mean for an electoral candidate and 10% above the mean for an elector. Notably, if the distribution of intelligence is assumed to be symmetric (as it is for IQ), this would imply that the majority of the population has no right to vote. Justifying the method of selection This method of selectivity is deliberate, so as to address what the concept considers to be flaws in the current systems of democracy. The primary object of criticism is the inability of majoritarian consensus to provide a reasonable platform for intelligent decision-making for the purpose of solving problems permanently. Geniocracy's criticism of this system is that the institutions of democracy become more concerned with appealing to popular consensus through emotive issues than with making long-term critical decisions, especially those that may involve issues that are not immediately relevant to the electorate. It asserts that a political mandate is far too important to leave simply to popularity, and that the critical decision-making required for government, especially in a world of globalization, cannot be based upon criteria of emotive or popular decision-making. In this respect, geniocracy derides liberal democracy as a form of "mediocracy". In a geniocracy, Earth would be ruled by a worldwide geniocratic government. Agenda Part of the geniocratic agenda is to promote the idea of a world government system, deriding the current state system as inadequate for dealing with contemporary global issues that are typical of globalisation, such as environmentalism, social justice, human rights, and the current economic system. In line with this, geniocracy proposes a different economic model (called Humanitarianism in the book Intelligent Design: Message from the Designers). Response to criticism In response to criticism of its controversial attitude towards selectivity, one general reply is to point out that universal suffrage, the current system, already discriminates to some degree, and to varying extents in different countries, as to who is allowed to vote. Primarily, this discrimination is against women, minority racial groups, refugees, immigrants, minority religious groups, minority ethnic groups, minors, elderly people, those living in poverty and homelessness, incarcerated and previously incarcerated people, and the mentally or physically incapacitated. This is on the basis that their ability to contribute to the decision-making process is considered either flawed or invalid for the purposes of the society. Status The current difficulty in the ideas of geniocracy is that the means of assessing intelligence are ill-defined.
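As a rough, purely illustrative sketch of how restrictive the proposed thresholds would be, the short Python fragment below assumes the conventional IQ convention of a mean of 100 and a standard deviation of 15 (an assumption made here for the example; the book does not fix a particular scale) and computes the share of a normally distributed population that would clear the elector threshold (10% above the mean, i.e. 110) and the candidate threshold (50% above the mean, i.e. 150).

import math

def fraction_above(threshold, mean=100.0, sd=15.0):
    # Share of a normally distributed population scoring above `threshold`.
    z = (threshold - mean) / (sd * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# Raelian thresholds mapped onto a conventional IQ scale (an assumption, not from the book).
elector_threshold = 1.10 * 100    # 10% above the mean -> 110
candidate_threshold = 1.50 * 100  # 50% above the mean -> 150

print(f"eligible electors:   {fraction_above(elector_threshold):.1%}")   # about 25%
print(f"eligible candidates: {fraction_above(candidate_threshold):.4%}") # about 0.04%

Under these illustrative assumptions, roughly a quarter of the population could vote and only a few hundredths of a percent could stand for office, which is consistent with the observation above that a majority would be excluded. The practical question, as just noted, is how such a level of intelligence would be measured in the first place.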
One idea offered by Raël in Geniocracy is to have specialists such as psychologists, neurologists, ethnologists, etc., perfect, or choose from among existing ones, a series of tests that would define each person's level of intelligence. They should be designed to measure intellectual potential rather than accumulation of knowledge. Some argue that other components, such as emotional intelligence, are necessary for a more rounded understanding of intelligence. As such, geniocracy's validity cannot really be assessed until better and more objective methods of intelligence assessment are made available. The matter of confronting moral problems that may arise is not addressed in the book Geniocracy; many leaders may be deeply intelligent and charismatic (having both high emotional/social intelligence and IQ) according to current means of measuring such factors, but no current scientific tests are a reliable enough measure of one's ability to make humanitarian choices (although online tests such as those used by retail chains to select job applicants may be relevant). The lack of the scientific rigour necessary for geniocracy to qualify as a properly testable political ideology can be noted in a number of modern and historical dictatorships as well as oligarchies. Because of the controversies surrounding geniocracy, Raël presents the idea as a classic utopia or provocative ideal and not necessarily a model that humanity will follow. Democratically defined regions The author of Geniocracy recommends (though does not require) a world government with 12 regions. Inhabitants would vote for which region they want to be part of. After the regions are defined, each is further divided into 12 sectors by applying the same democratic principle. While sectors of the same region are defined as having equal numbers of inhabitants, the regions themselves may have different populations, and each region's voting power would be proportional to its population. See also Idiocracy (a dark comedy film) depicts the United States in 2505, where the vast majority are mentally backwards (by current standards) despite widespread use of IQ tests. Superman: Red Son ends with Lex Luthor establishing a utopian but elitist world government under the philosophy of "Luthorism", which is essentially a geniocracy run by Luthor and other geniuses. Plato's Republic Meritocracy Netocracy Noocracy Transhumanism Technocracy Notes References Rael, La géniocratie. L'Edition du message, 1977. Rael, Geniocracy: Government of the People, for the People, by the Geniuses. Nova Distribution, 2008. Further reading External links Geniocracy.org Geniocracy Review on RaelNews Geniocracy piece on RaelRadio 'Geniocracy is the solution' - article on Raelnews Raëlian practices Religious texts Books about human intelligence
Radical constructivism
Radical constructivism is an approach to epistemology that situates knowledge in terms of knowers' experience. It looks to break with the conception of knowledge as a correspondence between a knower's understanding of their experience and the world beyond that experience. Adopting a skeptical position towards correspondence as in principle impossible to verify because one cannot access the world beyond one's experience in order to test the relation, radical constructivists look to redefine epistemology in terms of the viability of knowledge within knowers' experience. This break from the traditional framing of epistemology differentiates it from "trivial" forms of constructivism that emphasise the role of the knower in constructing knowledge while maintaining the traditional perspective of knowledge in terms of correspondence. Radical constructivism has been described as a "post-epistemological" position. Radical constructivism was initially formulated by Ernst von Glasersfeld, who drew on the work of Jean Piaget, Giambattista Vico, and George Berkeley amongst others. Radical constructivism is closely related to second-order cybernetics, and especially the work of Heinz von Foerster, Humberto Maturana, and Francisco Varela. During the 1980s, Siegfried J. Schmidt played a leading role in establishing radical constructivism as a paradigm within the German speaking academic world. Radical constructivism has been influential in educational research and the philosophy of science. Constructivist Foundations is a free online journal publishing peer-reviewed articles on radical constructivism by researchers from multiple domains. References Further reading Foerster, H. von, & Poerksen, B. (2002). Understanding systems (K. Leube, Trans.). Kluwer Academic. Glanville, R. (2007). The importance of being Ernst. Constructivist Foundations, 2(2/3), 5-6. http://constructivist.info/2/2-3/005.glanville Glasersfeld, E. von (1995). Radical constructivism: A way of knowing and learning. Routledge Falmer. Glasersfeld, E. von. (1984). An introduction to radical constructivism. In P. Watzlawick (Ed.), The invented reality (pp. 17-40). Norton. http://www.vonglasersfeld.com/070.1 Glasersfeld, E. von. (1990). An exposition of constructivism: Why some like it radical. Journal for Research in Mathematics Education Monograph, 4, 19-29. https://doi.org/10.2307/749910 Poerksen, B. (2004). The Certainty of Uncertainty: Dialogues Introducing Constructivism. Ingram Pub Services. Epistemological theories Cybernetics Constructivism
Fundamental theorem of software engineering
The fundamental theorem of software engineering (FTSE) is a term originated by Andrew Koenig to describe a remark by Butler Lampson attributed to David J. Wheeler: "We can solve any problem by introducing an extra level of indirection." The theorem does not describe an actual theorem that can be proven; rather, it is a general principle for managing complexity through abstraction. The theorem is often expanded by the humorous clause "…except for the problem of too many levels of indirection," referring to the fact that too many abstractions may create intrinsic complexity issues of their own. For example, the use of protocol layering in computer networks, which today is ubiquitous, has been criticized in ways that are typical of more general disadvantages of abstraction. Here, adding extra levels of indirection may cause higher layers to duplicate the functionality of lower layers, leading to inefficiency, and functionality at one layer may need data present only at another layer, which fundamentally violates the goal of separation into different layers. References Software engineering folklore
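To make the principle concrete, here is a minimal sketch in Python. It illustrates the general idea rather than any example from the sources above: callers are decoupled from a concrete storage backend by one extra level of indirection, an abstract interface. All class and function names are invented for the example.

```python
from abc import ABC, abstractmethod

class Store(ABC):
    """The extra level of indirection: callers depend only on this interface."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class SqliteBackend(Store):
    def save(self, key: str, value: str) -> None:
        print(f"sqlite: {key}={value}")   # stand-in for a real database write

class InMemoryBackend(Store):
    def __init__(self) -> None:
        self.data: dict[str, str] = {}
    def save(self, key: str, value: str) -> None:
        self.data[key] = value

def record_event(store: Store, name: str) -> None:
    # Written once against the abstraction; which backend runs underneath
    # becomes a configuration decision rather than a change to this code.
    store.save("event", name)

record_event(SqliteBackend(), "login")
record_event(InMemoryBackend(), "login")
```

The humorous corollary is also visible here: wrapping Store in further pass-through layers would add indirection without adding capability.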
Moral high ground
The moral high ground, in ethical or political parlance, refers to the status of being respected for remaining moral, and adhering to and upholding a universally recognized standard of justice or goodness. In a derogatory context, the term is often used to metaphorically describe a position of self-righteousness. "Parties seeking the moral high ground simply refuse to act in ways which are not viewed as legitimate and morally defensible." Politics Holding the moral high ground can be used to legitimize political movements, notably nonviolent resistance, especially in the face of violent opposition, and has been used by civil disobedience movements around the world to garner sympathy and support from society. Business Economist and social critic Robert H. Frank challenged the idea that prosocial behavior was necessarily deleterious in business in his book What Price the Moral High Ground? He argued that socially responsible firms often reap unexpected benefits even in highly competitive environments, because their commitment to principle makes them more attractive as partners to do business with. Everyday use In everyday use a person may take the perspective of the 'moral high ground' in order to produce a critique of something, or merely to win an argument. This perspective is sometimes associated with snobbery but may also be a legitimate way of taking up a stance. Social sciences or philosophies are sometimes accused of taking the 'moral high ground' because they are often inherently interested in the project of human freedom and justice. The traditional project of education itself may be seen as defending a type of moral high ground from popular culture, perhaps by using critical pedagogy: its proponents may themselves be accused (rightly or wrongly) of seeking a false and unjustified sense of superiority thereby. See also Critical pedagogy Moral hierarchy Political posturing Virtue signalling References High ground Political science
Theory of generations
Theory of generations (or sociology of generations) is a theory posed by Karl Mannheim in his 1928 essay, "Das Problem der Generationen," and translated into English in 1952 as "The Problem of Generations." This essay has been described as "the most systematic and fully developed" and even "the seminal theoretical treatment of generations as a sociological phenomenon". According to Mannheim, people are significantly influenced by the socio-historical environment (in particular, notable events that involve them actively) of their youth, giving rise, on the basis of shared experience, to social cohorts that in their turn influence events that shape future generations. Because of the historical context in which Mannheim wrote, some critics contend that the theory of generations centers on Western ideas and lacks a broader cultural understanding. Others argue that the theory of generations should be global in scope, due to the increasingly globalized nature of contemporary society. Theory Mannheim defined a generation (some have suggested that the term cohort is more correct) as a group of individuals of similar ages whose members have experienced a noteworthy historical event within a set period of time, thereby distinguishing social generations from kinship generations (family or blood-related generations). According to Mannheim, the social consciousness and perspective of youth reaching maturity in a particular time and place (what he termed "generational location") are significantly influenced by the major historical events of that era (thus becoming a "generation in actuality"). A key point, however, is that this major historical event has to occur and has to involve the individuals in their youth (thus shaping their lives, as later experiences will tend to receive meaning from those early experiences); mere chronological contemporaneity is not enough to produce a common generational consciousness. Mannheim in fact stressed that not every generation will develop an original and distinctive consciousness. Whether a generation succeeds in developing a distinctive consciousness depends significantly on the pace of social change ("tempo of change"). Mannheim also notes that social change can occur gradually, without the need for major historical events, but distinctive generational consciousness is more likely to emerge in times of accelerated social and cultural change. Mannheim also noted that the members of a generation are internally stratified (by their location, culture, class, etc.), so they may view different events from different angles and are therefore not totally homogeneous. Even within a "generation in actuality", there may be differing forms of response to the particular historical situation, giving rise to a number of "generational units" (or "social generations"). Application Mannheim's theory of generations has been applied to explain how important historical, cultural, and political events of the late 1950s and early 1960s, such as involvement alongside other generations in the Civil Rights Movement, made youth aware of inequalities in American society and gave rise to a belief that those inequalities needed to be changed by individual and collective action. This has pushed an influential minority of young people in the United States toward social movement activity.
On the other hand, the generation which came of age in the later part of the 1960s and 1970s was much less engaged in social movement activity, because - according to the theory of generations - the events of that era were more conducive to a political orientation stressing individual fulfillment instead of participation in such social movements questioning the status quo. Other notable applications of Mannheim's theory that illustrate the dynamics of generational change include: The effects of the Great Depression in the U.S. on young people's orientations toward work and politics How the Nazi regime in Germany affected young Germans' political attitudes Collective memories of important historical events that happen during late adolescence or early adulthood Changing patterns of civic engagement in the U.S. The effects of coming of age during the second-wave feminist movement in the U.S. on feminist identity Explaining the rise of same-sex marriage in the United States The effects of the Chinese Cultural Revolution on youth political activism Social generation studies have mainly focused on the youth experience from the perspective of the Western society. "Social generations theory lacks ample consideration of youth outside of the West. Increased empirical attention to non-Western cases corrects the tendency of youth studies to 'other' non-Western youth and provides a more in-depth understanding of the dynamics of reflexive life management." The constraints and opportunities affecting a youth's experiences within particular sociopolitical contexts require research to be done in a wide array of spaces to better reflect the theory and its implications on youth's experiences. Recent works discuss the difficulty of managing generational structures as global processes, proceeding to design glocal structures. See also Generation Strauss-Howe generation theory Sociology of aging Sociology of knowledge References 1923 in science Cultural generations Sociological theories
Contextualization (sociolinguistics)
Contextualization in sociolinguistics refers to the use of language (both spoken language and body language) to signal relevant aspects of an interaction or communicative situation. This may include clues to who is talking, their relationship, where the conversation is occurring, and much more. These clues can be drawn from how the language is being used, what type of language is being used (formal versus informal), and the participants' tone of voice (Andersen and Risør 2014). Contextualization includes verbal and non-verbal clues of things such as the power dynamic or the situation apparent from a conversation being analyzed or participated in. These clues are referred to as "contextualization cues". Contextualization cues are both verbal and non-verbal signs that language speakers use and language listeners hear that give clues into relationships, the situation, and the environment of the conversation (Ishida 2006). An example of contextualization in academia is the work of Basil Bernstein (1990 [1971]). Bernstein describes the contextualization of scientific knowledge in pedagogical contexts, such as textbooks. It is important to note that contextualization in relation to sociolinguistics only examines how language is being used, because sociolinguistics is the study of how society uses language. Contextualization cues As previously mentioned, contextualization cues are crucial in that they are the clues that allow observers to better understand the interaction being presented. Some contextualization cues include: intonation, accents, body language, type of language, and facial expressions (Andersen and Risør 2014). Intonation refers to the rise and fall of speech. By observing this, excitement, anger, interest, or other emotions can be determined. Accents indicate a person's place of origin, so in a conversation this can give clues not only to where a person is from but also to their values or cultural beliefs. Furthermore, when body language and facial expressions are considered together, more clues about the speakers' relationship, their feelings towards the topic or the other participant, and their emotions become evident (Ducharme and Bernard 2001). Finally, whether a person uses formal or informal language makes the relationship between the two speakers clearer. An interaction between two people who are peers or who are familiar with one another will most likely use the informal form of language. The reverse is true for people unfamiliar with each other or those in an unequal power dynamic (Masuda 2016). Impact of contextualization Contextualization has the overarching benefit of granting people the ability to understand. Zana Mahmood Hassan details the usefulness of contextualization in his paper, "Language Contextualization and Culture." Contextualization in sociolinguistics can allow those learning a language to begin to understand the culture through the cues found in the nuances of the language (Hassan 2014). Generalized, Hassan's findings reveal that language and context go hand in hand. Scholars have said that it is important to include culture studies in language studies because this aids students' learning. The informational and situational context that culture provides helps language "make sense"; culture is a contextualization cue (Hassan 2014). In all, contextualization, when implemented properly, can make learning a language easier. Ducharme and Bernard make a similar argument in their article.
They say that when students are given the tools and space to utilize contextualization, they are better able to learn a second language (Ducharme and Bernard 2001). Contextualization does not only ease everyday understand of language and language interactions, but it also aids in language learning and comprehension in an academic setting. Contextualization takes language just one step further by proving the intricacies of language and by filling in the gaps. Examples of contextualization in use Example one: John Gumperz John Gumperz (1982a) gives the following example. He suggests that in the following interaction the linguistic style used by the interviewer signals a context different from that expected by the husband. The interviewer, an African-American graduate student in educational psychology, has been sent to interview a woman at her home in a low-income neighborhood. The interviewer rings the door bell and the woman's husband opens the door. Husband: Interviewer: Ah, no. I only came to get some information. They called from the office. The husband addresses the interviewer in an informal style, marking their interaction as friendly. When the interviewer responds in a more formal style, the context becomes more formal. As a result, the interviewer reports that the interview was "stiff" (Gumperz 1982a: 133). Example two: Kyoko Masuda Kyoko Masuda provides another example from a study of conversations between female professors and students in Japan. She found that while students consistently used formal forms of Japanese when talking to professors, professors would often switch between the formal and informal forms depending on the topic of conversation (Masuda 2016). In this example, a student and professor are discussing the cultural difference in education between America and Japan: Student A: Because in Japan, they absolutely can't do that, we (teachers) must teach them, don't we? Professor A: I (definitely) think so, you know. Student A: What else? (American students) do things like eating food and putting their feet on the desk. I don't understand well whether that sort of thing is part of their culture. Professor A: After all, do you mind (their behavior)? Student A: I do mind. (Masuda 2016) In this interaction, the cues received by the student's style of speaking suggests that they are speaking to an authority figure, because they are deferring through the use of questions. Furthermore, you can see the formality in their language throughout the brief interaction. The student speaks in elongated sentences, saying things such as "I don't understand well" rather than just the informal "I don't get it." In examining the professor's use of language, they switch between the informal form ("I (definitely) think so, you know.") and the formal form ("After all, do you mind (their behavior)?"). This suggests that the professor used cues to learn that the student would prefer to remain in the formal form, and molded their language style to fit that. The reverse is seen within the next example: Student B: When students (in Section A) know the answer, they immediately respond. Professor B: Yeah, because they have confidence after all, don't they? Student B: Yeah. Students in Section B are really slow, you know. (Masuda 2016) After listening to the professor speak and seeing the professor utilize the informal form, the student shifted their style of speaking. Student B began by using the formal form, but ended with the informal form after examining the cues presented. 
References Bernstein, B. (1990). Class, codes and control. Vol. IV: The structuring of pedagogic discourse. London: Routledge. Ducharme, D., & Bernard, R. (2001). Communication breakdowns: An exploration of contextualization in native and non-native speakers of French. Journal of Pragmatics, 33(6), 825-847. Eerdmans, S., Prevignano, C., & Thibault, P. (2002). Language and interaction: Discussions with J. J. Gumperz. Amsterdam: Benjamins. Gumperz, J. J. (1982a). Discourse strategies. Cambridge: Cambridge University Press. Gumperz, J. J. (Ed.). (1982b). Language and social identity. Cambridge: Cambridge University Press. Hassan, Z. M. (2014). Language contextualization and culture. Procedia - Social and Behavioral Sciences, 136, 31-35. Ishida, H. (2006). Learners' perception and interpretation of contextualization cues in spontaneous Japanese conversation: Back-channel cue Uun. Journal of Pragmatics, 38(11), 1943-1981. Masuda, K. (2016). Style-shifting in student-professor interactions. Journal of Pragmatics, 101, 101-117. Sociolinguistics Discourse analysis
Explicit memory
Explicit memory (or declarative memory) is one of the two main types of long-term human memory, the other of which is implicit memory. Explicit memory is the conscious, intentional recollection of factual information, previous experiences, and concepts. This type of memory is dependent upon three processes: acquisition, consolidation, and retrieval. Explicit memory can be divided into two categories: episodic memory, which stores specific personal experiences, and semantic memory, which stores factual information. Explicit memory requires gradual learning, with multiple presentations of a stimulus and response. The type of knowledge that is stored in explicit memory is called declarative knowledge. Its counterpart, implicit memory, refers to memories acquired and used unconsciously, such as skills (e.g. knowing how to get dressed) or perception. Unlike explicit memory, implicit memory learns rapidly, even from a single stimulus, and it is influenced by other mental systems. Sometimes a distinction is made between explicit memory and declarative memory. In such cases, explicit memory relates to any kind of conscious memory, and declarative memory relates to any kind of memory that can be described in words; however, if it is assumed that a memory cannot be described without being conscious and vice versa, then the two concepts are identical. Types Episodic memory Episodic memory consists of the storage and recollection of observational information attached to specific life-events. These can be memories that happened to the subject directly or just memories of events that happened around them. Episodic memory is what people generally think of when they talk about memory. Episodic memory allows for recalling various contextual and situational details of one's previous experiences. Some examples of episodic memory include the memory of entering a specific classroom for the first time, the memory of storing one's carry-on baggage while boarding a plane headed to a specific destination on a specific day and time, the memory of being notified that one is being terminated from one's job, or the memory of notifying a subordinate that they are being terminated from their job. The retrieval of these episodic memories can be thought of as the action of mentally reliving in detail the past events that they concern. Episodic memory is believed to be the system that provides the basic support for semantic memory. Semantic memory Semantic memory refers to general world knowledge (facts, ideas, meaning and concepts) that can be articulated and is independent of personal experience. This includes world knowledge, object knowledge, language knowledge, and conceptual priming. Semantic memory is distinct from episodic memory, which is the memory of experiences and specific events that occur during people's lives and which can be recreated at any given point. For instance, semantic memory might contain information about what a cat is, whereas episodic memory might contain a specific memory of petting a particular cat. Humans can learn about new concepts by applying knowledge learned from things in the past. Other examples of semantic memory include types of food, capital cities of a geographic region, facts about people and dates, and the lexicon of a language, such as one's vocabulary.
Hybrid types Autobiographical memory is a memory system consisting of episodes recollected from an individual's life, based on a combination of episodic (personal experiences and specific objects, people and events experienced at particular time and place) and semantic (general knowledge and facts about the world) memory. Spatial memory is the part of memory responsible for recording information about one's environment and its spatial orientation. For example, a person's spatial memory is required in order to navigate around a familiar city, just as a rat's spatial memory is needed to learn the location of food at the end of a maze. It is often argued that in both humans and animals, spatial memories are summarized as a cognitive map. Spatial memory has representations within working, short-term and long-term memory. Research indicates that there are specific areas of the brain associated with spatial memory. Many methods are used for measuring spatial memory in children, adults, and animals. Examples The model of language Declarative and procedural memory fall into two categories of human language. Declarative memory system is used by the lexicon. Declarative memory stores all arbitrary, unique word-specific knowledge, including word meanings, word sounds, and abstract representations such as word category. In other words, declarative memory is where random bits and pieces of knowledge about language that are specific and unpredictable are stored. Declarative memory includes representations of simple words (e.g. cat), bound morphemes (morphemes that have to go together), irregular morphological forms, verb complements, and idioms (or non-compositional semantic units). Irregular morphological structures fall into the declarative system; the irregularities (such as went being the past form of go or idioms) are what we have to memorize. Declarative memory supports a superposition associative memory, which allows for generalizations across representations. For example, the memorization of phonologically similar stem-irregular past tense pairs (e.g. spring-sprung, sing-sang) may allow for memory-based generalization to new irregularities, either from real words (bring-brought) or from novel ones (spring-sprung). This ability to generalize could underlie some degree of productivity within the memory system. While declarative memory deals with irregularities of morphology, procedural memory uses regular phonology and regular morphology. Procedural memory system is used by grammar, where grammar is defined by the building of a rule governed structure. Language's ability to use grammar comes from procedural memory, making grammar like another procedure. It underlies the learning of new, and already learned, rule-based procedures that oversee the regularities of language, particularly those procedures related to combining items into complex structures that have precedence and hierarchical relations- precedence in the sense of left to right and hierarchical in the sense of top to bottom. 
Procedural memory builds rule-governed structure (merging or series) of forms and representations into complex structures such as: Phonology Inflectional and derivational morphology Compositional semantics (the meaning of composition of words into complex structures) Syntax Broca and Wernicke's Brain Region Broca's area is important to procedural memory, because, "Broca's area is involved in the expressive aspects of spoken and written language (production of sentences constrained by the rules of grammar and syntax)." Broca's area corresponds to parts of the inferior frontal gyrus, presumably Brodmann's area 44 and 45. Procedural memory is affected by Broca's aphasia. Agrammatism is apparent in Broca's aphasia patients, where a lack of fluency and omission of morphology and function words occur. While those with Broca's aphasia are still able to understand or comprehend speech, they have difficulty producing it. Speech production becomes more difficult when sentences are complex; for example, the passive voice is a grammatically complex structure that is harder for those with Broca's aphasia to comprehend. Wernicke's area is crucial for language development, focusing on the comprehension of speech, rather than speech production. Wernicke's aphasia affects declarative memory. Opposite of Broca's aphasia, paragrammatism is apparent, which causes normal or excessive fluency and use of inappropriate words (neologisms). Those with Wernicke's aphasia struggle to understand the meaning of words and may not recognize their mistakes in speech. History The study of human memory stretches back over the last 2000 years. An early attempt to understand memory can be found in Aristotle's major treatise, On the Soul, in which he compares the human mind to a blank slate. He theorized that all humans are born free of any knowledge and are the sum of their experiences. It was only in the late 1800s, however, that a young German philosopher by the name of Herman Ebbinghaus developed the first scientific approach to studying memory. While some of his findings have endured and remain relevant to this day (Learning Curve), his greatest contribution to the field of memory research was demonstrating that memory can be studied scientifically. In 1972, Endel Tulving proposed the distinction between episodic and semantic memory. This was quickly adopted and is now widely accepted. Following this, in 1985, Daniel Schacter proposed a more general distinction between explicit (declarative) and implicit (procedural) memory With the recent advances in neuroimaging technology, there have been a multitude of findings linking specific brain areas to declarative memory. Despite those advances in cognitive psychology, there is still much to be discovered in terms of the operating mechanisms of declarative memory. It is unclear whether declarative memory is mediated by a particular memory system, or if it is more accurately classified as a type of knowledge. Also it is unknown how or why declarative memory evolved in the first place. Neuropsychology Normal brain function Hippocampus Although many psychologists believe that the entire brain is involved with memory, the hippocampus, and surrounding structures appear to be most important in declarative memory specifically. The ability to retain and recall episodic memories is highly dependent on the hippocampus, whereas the formation of new declarative memories relies on both the hippocampus and the parahippocampus. 
Other studies have found that the parahippocampal cortices were related to superior recognition memory. The Three Stage Model was developed by Eichenbaum, et al. (2001), and proposes that the hippocampus does three things with episodic memory: Mediates the recording of episodic memories Identifies common features between episodes Links these common episodes in a memory space. To support this model, a version of Piaget's Transitive Inference Task was used to show that the hippocampus is in fact used as the memory space. When experiencing an event for the first time, a link is formed in the hippocampus allowing us to recall that event in the future. Separate links are also made for features related to that event. For example, when you meet someone new, a unique link is created for them. More links are then connected to that person's link so you can remember what colour their shirt was, what the weather was like when you met them, etc. Specific episodes are made easier to remember and recall by repeatedly exposing oneself to them (which strengthens the links in the memory space) allowing for faster retrieval when remembering. Hippocampal cells (neurons) are activated depending on what information one is exposed to at that moment. Some cells are specific to spatial information, certain stimuli (smells, etc.), or behaviours as has been shown in a Radial Maze Task. It is therefore the hippocampus that allows us to recognize certain situations, environments, etc. as being either distinct or similar to others. However, the Three Stage Model does not incorporate the importance of other cortical structures in memory. The anatomy of the hippocampus is largely conserved across mammals, and the role of these areas in declarative memory are conserved across species as well. The organization and neural pathways of the hippocampus are very similar in humans and other mammal species. In humans and other mammals, a cross-section of the hippocampus shows the dentate gyrus as well as the dense cell layers of the CA fields. The intrinsic connectivity of these areas are also conserved. Results from an experiment by Davachi, Mitchell, and Wagner (2003) and subsequent research (Davachi, 2006) shows that activation in the hippocampus during encoding is related to a subject's ability to recall prior events or later relational memories. These tests did not differentiate between individual test items later seen and those forgotten. Prefrontal cortex The lateral Prefrontal cortex (PFC) is essential for remembering contextual details of an experience rather than for memory formation. The PFC is also more involved with episodic memory than semantic memory, although it does play a small role in semantics. Using PET studies and word stimuli, Endel Tulving found that remembering is an automatic process. It is also well documented that a hemispheric asymmetry occurs in the PFC: When encoding memories, the Left Dorsolateral PFC (LPFC) is activated, and when retrieving memories, activation is seen in the Right Dorsolateral PFC (RPFC). Studies have also shown that the PFC is extremely involved with autonoetic consciousness (See Tulving's theory). This is responsible for humans' recollective experiences and 'mental time travelling' abilities (characteristics of episodic memory). Amygdala The amygdala is believed to be involved in the encoding and retrieval of emotionally charged memories. Much of the evidence for this has come from research on a phenomenon known as flashbulb memories. 
These are instances in which memories of powerful emotional events are more highly detailed and enduring than regular memories (e.g. September 11 attacks, assassination of JFK). These memories have been linked to increased activation in the amygdala. Recent studies of patients with damage to the amygdala suggest that it is involved in memory for general knowledge, and not for specific information. Other structures involved The regions of the diencephalon have shown brain activation when a remote memory is being recovered and the occipital lobe, ventral temporal lobe, and fusiform gyrus all play a role in memory formation. Lesion studies Lesion studies are commonly used in cognitive neuroscience research. Lesions can occur naturally through trauma or disease, or they can be surgically induced by researchers. In the study of declarative memory, the hippocampus and the amygdala are two structures frequently examined using this technique. Hippocampal lesion studies The Morris water navigation task tests spatial learning in rats. In this test rats learn to escape from a pool by swimming toward a platform submerged just below the surface of the water. Visual cues that surround the pool (e.g. a chair or window) help the rat to locate the platform on subsequent trials. The rats' use of specific events, cues, and places are all forms of declarative memory. Two groups of rats are observed: a control group with no lesions and an experimental group with hippocampal lesions. In this task created by Morris, rats are placed in the pool at the same position for 12 trials. Each trial is timed and the path taken by the rats is recorded. Rats with hippocampal lesions successfully learn to find the platform. If the starting point is moved, the rats with hippocampal lesions typically fail to locate the platform. The control rats, however, are able to find the platform using the cues acquired during the learning trials. This demonstrates the involvement of the hippocampus in declarative memory. The Odor-odor Recognition Task, devised by Bunsey and Eichenbaum, involves a social encounter between two rats (a subject and a demonstrator). The demonstrator, after eating a specific type of food, interacts with the subject rat, who then smells the food odor on the other's breath. The experimenters then present the subject rat with a decision between two food options; the food previously eaten by the demonstrator, and a novel food. The researchers found that when there was no time delay, both control rats and rats with lesions chose the familiar food. After 24 hours, however, the rats with hippocampal lesions were just as likely to eat both types of food, while control rats chose the familiar food. This can be attributed to the inability to form episodic memories due to lesions in the hippocampus. The effects of this study can be observed in humans with amnesia, indicating the role of the hippocampus in developing episodic memories that can be generalized to similar situations. Henry Molaison, previously known as H.M., had parts of both his left and right medial temporal lobes (hippocampi) removed which resulted in the loss of the ability to form new memories. The long-term declarative memory was crucially affected when the structures from the medial temporal lobe were removed, including the ability to form new semantic knowledge and memories. The dissociation in Molaison between the acquisition of declarative memory and other kinds of learning was seen initially in motor learning. 
Molaison's declarative memory was not functioning, as was seen when Molaison completed the task of repetition priming. His performance did improve over trials; however, his scores were inferior to those of control participants. In Molaison's case, the same pattern found in this priming task was reflected in other basic memory functions such as remembering, recall, and recognition. Lesions should not be interpreted as an all-or-nothing condition: in Molaison's case not all memory and recognition were lost, and although his declarative memory was severely damaged, he still retained a sense of self and memories that were formed before the lesion occurred. Patient R.B. was another clinical case reinforcing the role of the hippocampus in declarative memory. After suffering an ischemic episode during a cardiac bypass operation, Patient R.B. awoke with a severe anterograde amnesic disorder. IQ and cognition were unaffected, but declarative memory deficits were observed (although not to the extent seen in Molaison). Upon death, an autopsy revealed that Patient R.B. had bilateral lesions of the CA1 cell region along the whole length of the hippocampus. Amygdala lesion studies Adolphs, Cahill and Schul completed a study showing that emotional arousal facilitates the encoding of material into long-term declarative memory. They selected two subjects with bilateral damage to the amygdala, as well as six control subjects and six subjects with brain damage. All subjects were shown a series of twelve slides accompanied by a narrative. The slides varied in the degree to which they evoked emotion – slides 1 through 4 and slides 9 through 12 contained non-emotional content. Slides 5 through 8 contained emotional material, and the seventh slide contained the most emotionally arousing image and description (a picture of the surgically repaired legs of a car crash victim). The emotionally arousing slide (slide 7) was remembered no better by the participants with bilateral damage than any of the other slides. All other participants remembered the seventh slide best and in the most detail. This shows that the amygdala is necessary to facilitate the encoding of declarative knowledge regarding emotionally arousing stimuli, but is not required for encoding knowledge of emotionally neutral stimuli. Affecting factors Stress Stress may have an effect on the recall of declarative memories. Lupien, et al. completed a study that had 3 phases for participants to take part in. Phase 1 involved memorizing a series of words, phase 2 entailed either a stressful (public speaking) or non-stressful situation (an attention task), and phase 3 required participants to recall the words they learned in phase 1. There were signs of decreased declarative memory performance in the participants who had to complete the stressful situation after learning the words. Recall performance after the stressful situation was found to be worse overall than after the non-stressful situation. It was also found that performance differed based on whether the participant responded to the stressful situation with an increase in measured levels of salivary cortisol. Posttraumatic stress disorder (PTSD) emerges after exposure to a traumatic event eliciting fear, horror or helplessness that involves bodily injury, the threat of injury, or death to one's self or another person. The chronic stress in PTSD contributes to an observed decrease in hippocampal volume and declarative memory deficits.
Stress can alter memory functions, reward, immune function, metabolism and susceptibility to different diseases. Disease risk is particularly pertinent to mental illnesses, whereby chronic or severe stress remains a common risk factor for several mental illnesses. One system suggests there are five types of stress labeled acute time-limited stressors, brief naturalistic stressors, stressful event sequences, chronic stressors, and distant stressors. An acute time-limited stressor involves a short-term challenge, while a brief natural stressor involves an event that is normal but nevertheless challenging. A stressful event sequence is a stressor that occurs, and then continues to yield stress into the immediate future. A chronic stressor involves exposure to a long-term stressor, and a distant stressor is a stressor that is not immediate. Neurochemical factors of stress on the brain Cortisol is the primary glucocorticoid in the human body. In the brain, it modulates the ability of the hippocampus and prefrontal cortex to process memories. Although the exact molecular mechanism of how glucocorticoids influence memory formation is unknown, the presence of glucocorticoid receptors in the hippocampus and prefrontal cortex tell us these structures are some of its many targets. It has been demonstrated that cortisone, a glucocorticoid, impaired blood flow in the right parahippocampal gyrus, left visual cortex and cerebellum. A study by Damoiseaux et al. (2007) evaluated the effects of glucocorticoids on hippocampal and prefrontal cortex activation during declarative memory retrieval. They found that administration of hydrocortisone (name given to cortisol when it is used as a medication) to participants one hour before retrieval of information impairs free recall of words, yet when administered before or after learning they had no effect on recall. They also found that hydrocortisone decreases brain activity in the above-mentioned areas during declarative memory retrieval. Therefore, naturally occurring elevations of cortisol during periods of stress lead to impairment of declarative memory. It is important to note that this study involved only male subjects, which may be significant as sex steroid hormones may have different effects in response to cortisol administration. Men and women also respond to emotional stimuli differently and this may affect cortisol levels. This was also the first Functional magnetic resonance imaging(fMRI) study done utilising glucocorticoids, therefore more research is necessary to further substantiate these findings. Consolidation during sleep It is believed that sleep plays an active role in consolidation of declarative memory. Specifically, sleep's unique properties enhance memory consolidation, such as the reactivation of newly learned memories during sleep. For example, it has been suggested that the central mechanism for consolidation of declarative memory during sleep is the reactivation of hippocampal memory representations. This reactivation transfers information to neocortical networks where it is integrated into long-term representations. Studies on rats involving maze learning found that hippocampal neuronal assemblies that are used in the encoding of spatial information are reactivated in the same temporal order. Similarly, positron emission tomography (PET) has shown reactivation of the hippocampus in slow-wave sleep (SWS) after spatial learning. 
Together these studies show that newly learned memories are reactivated during sleep and through this process new memory traces are consolidated. In addition, researchers have identified three types of sleep (SWS, sleep spindle and REM) in which declarative memory is consolidated. Slow-wave sleep, often referred to as deep sleep, plays the most important role in consolidation of declarative memory and there is a large amount of evidence to support this claim. One study found that the first 3.5 hours of sleep offer the greatest performance enhancement on memory recall tasks because the first couple of hours are dominated by SWS. Additional hours of sleep do not add to the initial level of performance. Thus this study suggests that full sleep may not be important for optimal performance of memory. Another study shows that people who experience SWS during the first half of their sleep cycle compared to subjects who did not, showed better recall of information. However this is not the case for subjects who were tested for the second half of their sleep cycle, as they experience less SWS. Another key piece of evidence regarding SWS's involvement in declarative memory consolidation is a finding that people with pathological conditions of sleep, such as insomnia, exhibit both reduction in Slow-Wave Sleep and also have impaired consolidation of declarative memory during sleep. Another study found that middle aged people compared to young group had a worse retrieval of memories. This in turn indicated that SWS is associated with poor declarative memory consolidation but not with age itself. Some researchers suggest that sleep spindle, a burst of brain activity occurring during stage 2 sleep, plays a role in boosting consolidation of declarative memories. Critics point out that spindle activity is positively correlated with intelligence. In contrast, Schabus and Gruber point out that sleep spindle activity only relates to performance on newly learned memories and not to absolute performance. This supports the hypothesis that sleep spindle helps to consolidate recent memory traces but not memory performance in general. The relationship between sleep spindles and declarative memory consolidation is not yet fully understood. There is a relatively small body of evidence that supports the idea that REM sleep helps consolidate highly emotional declarative memories. For instance Wagner, et al. compared memory retention for emotional versus neutral text over two instances; early sleep that is dominated by SWS and late sleep that is dominated by REM phase. This study found that sleep improved memory retention of emotional text only during late sleep phase, which was primarily REM. Similarly, Hu & Stylos-Allen, et al. performed a study with emotional versus neutral pictures and concluded that REM sleep facilitates consolidation of emotional declarative memories. The view that sleep plays an active role in declarative memory consolidation is not shared by all researchers. For instance Ellenbogen, et al. argue that sleep actively protects declarative memory from associative interference. Furthermore, Wixted believes that the sole role of sleep in declarative memory consolidation is nothing more but creating ideal conditions for memory consolidation. For example, when awake, people are bombarded with mental activity which interferes with effective consolidation. However, during sleep, when interference is minimal, memories can be consolidated without associative interference. 
More research is needed to make a definite statement whether sleep creates favourable conditions for consolidation or it actively enhances declarative memory consolidation. Encoding and retrieval The encoding of explicit memory depends on conceptually driven, top-down processing, in which a subject reorganizes the data to store it. The subject makes associations with previously related stimuli or experiences. This was termed deep encoding by Fergus Craik and Robert Lockhart. This way a memory persists longer and will be remembered well. The later recall of information is thus greatly influenced by the way in which the information was originally processed. The depth-of-processing effect is the improvement in subsequent recall of an object about which a person has given thought to its meaning or shape. Simply put: To create explicit memories, you have to do something with your experiences: think about them, talk about them, write them down, study them, etc. The more you do, the better you will remember. Testing of information while learning has also shown to improve encoding in explicit memory. If a student reads a text book and then tests themselves afterward, their semantic memory of what was read is improved. This study – test method improves encoding of information. This Phenomenon is referred to as the Testing Effect. Retrieval: Because a person has played an active role in processing explicit information, the internal cues that were used in processing it can also be used to initiate spontaneous recall. When someone talks about an experience, the words they use will help when they try to remember this experience at a later date. The conditions in which information is memorized can affect recall. If a person has the same surroundings or cues when the original information is presented, they are more likely to remember it. This is referred to as encoding specificity and it also applies to explicit memory. In a study where subjects were asked to perform a cued recall task participants with a high working memory did better than participants with a low working memory when the conditions were maintained. When the conditions were changed for recall both groups dropped. The subjects with higher working memory declined more. This is thought to happen because matching environments activates areas of the brain known as the left inferior frontal gyrus and the hippocampus. Neural structures involved Several neural structures are proposed to be involved in explicit memory. Most are in the temporal lobe or closely related to it, such as the amygdala, the hippocampus, the rhinal cortex in the temporal lobe, and the prefrontal cortex. Nuclei in the thalamus also are included, because many connections between the prefrontal cortex and temporal cortex are made through the thalamus. The regions that make up the explicit memory circuit receive input from the neocortex and from brainstem systems, including acetylcholine, serotonin, and noradrenaline systems. Traumatic brain injury While the human brain is certainly regarded for its plasticity, there is some evidence that shows traumatic brain injury (TBI) in young children can have negative effects on explicit memory. Researchers have looked at children with TBI in early childhood (i.e. infancy) and late childhood. Findings showed that children with severe TBI in late childhood experienced impaired explicit memory while still maintaining implicit memory formation. 
Researchers also found that children with severe TBI in early childhood had an increased chance of having both impaired explicit memory and impaired implicit memory. While children with severe TBI are at risk for impaired explicit memory, the chances of impaired explicit memory in adults with severe TBI are much greater. Memory loss Alzheimer's disease has a profound effect on explicit memory. Mild cognitive impairment is an early sign of Alzheimer's disease. People with memory conditions often receive cognitive training. When an fMRI was used to view brain activity after training, it found increased activation in various neural systems that are involved with explicit memory. People with Alzheimer's have problems learning new tasks. However, if the task is presented repeatedly they can learn and retain some new knowledge of the task. This effect is more apparent if the information is familiar. The person with Alzheimer's must also be guided through the task and prevented from making errors. Alzheimer's also has an effect on explicit spatial memory. This means that people with Alzheimer's have difficulty remembering where items are placed in unfamiliar environments. The hippocampus has been shown to become active in semantic and episodic memory. The effects of Alzheimer's disease are seen in the episodic part of explicit memory. This can lead to problems with communication. A study was conducted where Alzheimer's patients were asked to name a variety of objects from different periods. The results showed that their ability to name the object depended on frequency of use of the item and when the item was first acquired. This effect on semantic memory also has an effect on music and tones. Alzheimer's patients have difficulty distinguishing between different melodies they have never heard before. People with Alzheimer's also have issues with picturing future events. This is due to a deficit in episodic future thinking. There are many other reasons why adults and others may begin to have memory loss. In popular culture Amnesia is frequently portrayed in television and movies. Some of the better-known examples include: In the romantic comedy 50 First Dates (2004), Adam Sandler plays veterinarian Henry Roth, who falls for Lucy Whitmore, played by Drew Barrymore. Having lost her short-term memory in a car crash, Lucy can only remember the current day's events until she falls asleep. When she wakes up the next morning, she has no recollection of the previous day's experiences. Those experiences would normally be transferred into declarative knowledge and allow them to be recalled in the future. The movie is not the most accurate representation of a true amnesic patient, but it is useful to inform viewers of the detrimental effects of amnesia. Memento (2000) is a film inspired by the case of Henry Molaison (H.M.). Guy Pearce plays a former insurance investigator suffering from severe anterograde amnesia, which was caused by a head injury. Unlike most other amnesiacs, Leonard retains his identity and the memories of events that occurred before the injury but has lost all ability to form new memories. That loss of ability indicates that the head injury affected the medial temporal lobe of the brain, which has resulted in his inability to form declarative memory. Finding Nemo features a reef fish named Dory with an inability to develop declarative memory. That prevents her from learning or retaining any new information such as names or directions.
The exact origin of Dory's impairment is not mentioned in the film, but her memory loss accurately portrays the difficulties facing amnesiacs. See also Gollin figure test References Memory
Adaptive software development
Adaptive software development (ASD) is a software development process that grew out of the work by Jim Highsmith and Sam Bayer on rapid application development (RAD). It embodies the principle that continuous adaptation of the process to the work at hand is the normal state of affairs. Adaptive software development replaces the traditional waterfall cycle with a repeating series of speculate, collaborate, and learn cycles. This dynamic cycle provides for continuous learning and adaptation to the emergent state of the project. The characteristics of an ASD life cycle are that it is mission focused, feature based, iterative, timeboxed, risk driven, and change tolerant. As with RAD, ASD is also an antecedent to agile software development. The word speculate refers to the paradox of planning: it is more realistic to assume that all stakeholders will be wrong to some degree about certain aspects of the project's mission while trying to define it. During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive cycle planning uses project initiation information—the customer's mission statement, project constraints (e.g., delivery dates or user descriptions), and basic requirements—to define the set of release cycles (software increments) that will be required for the project. Collaboration refers to the effort of balancing work on the predictable parts of the environment (planning and guiding them) with adaptation to the uncertain surrounding mix of changes caused by various factors, such as technology, requirements, stakeholders, and software vendors. The learning cycles, which challenge all stakeholders, are based on short iterations of design, build, and testing. During these iterations knowledge is gathered by making small mistakes based on false assumptions and correcting those mistakes, leading to greater experience and eventually mastery of the problem domain. References Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, Highsmith, J.A., 2000, New York: Dorset House, 392pp. Agile Project Management: Creating Innovative Products, Jim Highsmith, Addison-Wesley, March 2004, 277pp. Software Engineering: A Practitioner's Approach, Roger Pressman, Bruce Maxim. Software development process Agile software development
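As a rough illustration of the speculate–collaborate–learn rhythm described above, the sketch below models a timeboxed adaptive cycle in Python. It is a schematic sketch under the assumption that a cycle can be reduced to three functions; the class, function, and feature names are invented for the example and are not prescribed by Highsmith's books.

```python
from dataclasses import dataclass, field

@dataclass
class Cycle:
    """One timeboxed, feature-based iteration of an adaptive life cycle."""
    mission: str
    planned_features: list[str]
    delivered: list[str] = field(default_factory=list)
    lessons: list[str] = field(default_factory=list)

def speculate(mission: str, backlog: list[str], capacity: int) -> Cycle:
    # Plan the next increment from the mission and current best guesses;
    # the plan is expected to be partly wrong and revised every cycle.
    return Cycle(mission, backlog[:capacity])

def collaborate(cycle: Cycle) -> None:
    # Stand-in for the team building the planned features together.
    cycle.delivered = list(cycle.planned_features)

def learn(cycle: Cycle, feedback: list[str]) -> list[str]:
    # Review results with stakeholders and feed corrections into the next
    # round of speculation instead of following a fixed up-front plan.
    cycle.lessons = feedback
    return feedback

backlog = ["login", "search", "reports", "export"]
for _ in range(2):                       # two short, timeboxed cycles
    cycle = speculate("ship a usable pilot", backlog, capacity=2)
    collaborate(cycle)
    learn(cycle, ["users want export sooner"])
    backlog = [f for f in backlog if f not in cycle.delivered]
    # A real team would also reorder the remaining backlog from the lessons.
```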
Heteronomy
Heteronomy refers to action that is influenced by a force outside the individual; in other words, it is the state or condition of being ruled, governed, or under the sway of another, as in a military occupation. Immanuel Kant, drawing on Jean-Jacques Rousseau, considered such an action nonmoral. It is the opposite of autonomy. Philosopher Cornelius Castoriadis contrasted heteronomy with autonomy by noting that while all societies create their own institutions (laws, traditions and behaviors), autonomous societies are those whose members are aware of this fact and explicitly self-institute (αυτο-νομούνται). In contrast, the members of heteronomous societies (hetero = others) attribute their imaginaries to some extra-social authority (e.g., God, the state, ancestors, historical necessity, etc.). See also Autonomy and heteronomy (linguistics) Social alienation Marx's theory of alienation References Further reading Concepts in ethics
Education and technology
The relationship between education and technology has emerged as a pivotal aspect of contemporary development, propelled by the rapid expansion of internet connectivity and mobile penetration. Our world is now interconnected, with approximately 40% of the global population using the internet, a figure that continues to rise at an astonishing pace. While internet connectivity varies across countries and regions, the number of households with internet access in the global South has surpassed that in the global North. Additionally, over 70% of mobile telephone subscriptions worldwide are now found in the global South. It is projected that within the next twenty years, five billion people will transition from having no connectivity to enjoying full access. Such technologies have expanded opportunities for freedom of expression and social, civic, and political mobilization, but they also raise important concerns. The availability of personal information in the cyber world, for example, raises significant issues of privacy and security. New spaces for communication and socialization are transforming the concept of the 'social' and necessitate enforceable legal and other safeguards to prevent their overuse, abuse, and misuse. Examples of such misuse of the internet, mobile technology and social media range from cyber-bullying to criminal activities, including terrorism. In this new cyber world, educators need to better prepare new generations of 'digital natives' to navigate the ethical and social dimensions of not only existing digital technologies but also those yet to be invented. Education and technology in developing countries The role of educational technology in enhancing access to education, particularly in impoverished areas and developing countries, is increasingly significant. However, it is important to recognise that educational technology is not solely about the integration of education and technology; it is also influenced by the societal culture in which it is implemented. Various organizations, including charities like One Laptop per Child, are dedicated to providing infrastructures that enable disadvantaged individuals to access educational materials. The OLPC foundation, supported by major corporations and originating from MIT Media Lab, has a mission to develop a $100 laptop for delivering educational software. These laptops have been made widely available since 2008, either sold at cost or distributed through donations. In developing countries, technology adoption may be limited, but some countries have made progress in implementing pro-technology policies and advancements in biotechnology. One positive outcome of improved technology in these countries is reduced dependence on developed nations. Strategies such as developing infrastructure, promoting entrepreneurship, and formulating open policies towards technology can be effective in enhancing education and economies in developing nations. In Africa, the New Partnership for Africa's Development (NEPAD) has launched an "e-school program" with the ambitious goal of providing computer equipment, learning materials, and internet access to all 600,000 primary and high schools within a decade. Another notable initiative, nabuur.com, supported by former US President Bill Clinton, utilises the internet to facilitate cooperation among individuals on social development issues. India is also making advancements in educational technology by implementing initiatives that deliver learning materials directly to students.
In 2004, the Indian Space Research Organisation launched EDUSAT, a communications satellite that provides cost-effective access to educational materials, reaching a larger portion of the country's population. Educational technology (EdTech) encompasses information and communication technology (ICT) and has the potential to address various challenges, such as the absence of teachers, by providing improved lessons, teacher training, and student motivation. In recent years, the cost of educational technology has significantly decreased, making it more accessible even in economically disadvantaged countries. Tablets, for example, can now be purchased for as little as $28, and India offers the most affordable data plans worldwide. This affordability has given rise to new ventures like ExtraClass, which aims to provide affordable education to 260 million children. Effects of Technology on Education The role of innovation in education is crucial for ensuring equal access to essential tools that can have a significant impact on the lives of both educators and students. To develop effective strategies that cater to the specific needs of a developing society, several important themes can be identified. One such theme is the necessity to provide students with access to appropriate learning materials, particularly in their native languages, as this facilitates better comprehension of subjects. In this context, it is essential for education to adopt a humanistic approach, particularly in light of the increasing prominence of digital technologies. An example of the application of innovative technology in education is the implementation of an AI-based tutoring system at an entry-level IT school in Pensacola by the U.S. Navy. This system incorporates a human tutor who closely monitors the progress of the students and provides individual assessments. According to the Navy, students who utilised the digital tutoring system consistently achieved higher test scores compared to those who did not use the digital tutor. The adaptive nature of the technology appears to have a positive impact on students, as it can assist individuals with diverse learning styles and better equip them to learn independently. Controversy Technologies are being developed to address different challenges in topics such as education, health and global poverty, but there are cases in which this does not work or the results fall far short of expectations. Kentaro Toyama, in his book Geek Heresy, gives examples of this. He highlights the cases of computers in Bangalore that are locked away because teachers don't know what to do with them and mobile phone apps meant to spread hygiene practices that fail to improve health in Africa. Moreover, over the past decades there have been huge improvements in technology that have done little to reduce rising poverty and inequality, even in developed countries like the United States. In addition, the economist Ana Santiago and her colleagues at the Inter-American Development Bank found no educational advantage in a One Laptop per Child program in Peru. Another team of researchers found similar results in Uruguay, and concluded: "Our findings confirm that the technology alone cannot impact learning". References Free content from UNESCO Educational technology
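The adaptive tutoring behaviour described above can be illustrated with a small, generic sketch. The Python code below is not a description of the Navy's system, whose internals are not given here; it simply shows one common way an adaptive tutor can work: keep a per-skill mastery estimate, update it after every answer, and always serve an exercise from the weakest skill. The skill names and the update rule are invented for illustration.

```python
import random

class AdaptiveTutor:
    """Minimal illustrative tutor: track mastery per skill, drill the weakest one.

    A generic sketch, not a model of any specific commercial or military system.
    """

    def __init__(self, skills, learning_rate=0.3):
        self.mastery = {skill: 0.0 for skill in skills}  # 0 = unknown, 1 = mastered
        self.learning_rate = learning_rate

    def next_skill(self):
        # Serve the skill with the lowest estimated mastery.
        return min(self.mastery, key=self.mastery.get)

    def record_answer(self, skill, correct):
        # Nudge the estimate toward 1 (correct) or 0 (incorrect).
        target = 1.0 if correct else 0.0
        self.mastery[skill] += self.learning_rate * (target - self.mastery[skill])

if __name__ == "__main__":
    tutor = AdaptiveTutor(["subnetting", "routing", "firewalls"])  # hypothetical skills
    for _ in range(20):
        skill = tutor.next_skill()
        correct = random.random() < 0.6   # stand-in for a real student response
        tutor.record_answer(skill, correct)
    print(tutor.mastery)
```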
0.775335
0.972006
0.75363
Vroom–Yetton decision model
The Vroom–Yetton contingency model is a situational leadership theory of industrial and organizational psychology developed by Victor Vroom, in collaboration with Philip Yetton (1973) and later with Arthur Jago (1988). The situational theory argues that the best style of leadership is contingent on the situation. The model suggests how to select a leadership style for group decision-making. The Vroom-Yetton-Jago Normative Decision Model helps to answer the question of how much involvement followers should have in a given decision. It identifies five different styles (ranging from autocratic to consultative to group-based decisions), chosen according to the situation and the level of involvement. They are: Autocratic Type 1 (AI) The leader makes the decision alone using information that is readily available to him or her at the time. This type is completely autocratic. Autocratic Type 2 (AII) The leader collects the required information from followers, then makes the decision alone. Followers may or may not be told what the problem or decision is. Here, followers' involvement is limited to providing information. Consultative Type 1 (CI) The leader shares the problem with relevant followers individually, seeks their ideas and suggestions, and makes the decision alone. Followers do not meet each other, and the leader's decision may or may not reflect their influence. Here, followers' involvement is at the level of providing alternatives individually. Consultative Type 2 (CII) The leader shares the problem with relevant followers as a group, seeks their ideas and suggestions, and makes the decision alone. Followers meet each other and, through discussion, come to understand the other alternatives, but the leader's decision may or may not reflect their influence. Here, followers' involvement is at the level of helping as a group in decision-making. Group-based Type 2 (GII) The leader discusses the problem and situation with followers as a group and seeks their ideas and suggestions through brainstorming. The leader accepts any decision and does not try to force his or her own idea. The decision accepted by the group is the final one. Vroom and Yetton formulated the following seven questions on decision quality, commitment, problem information and decision acceptance, with which leaders can determine the level of follower involvement in a decision. Each question must be answered either 'Yes' or 'No' for the current scenario: Is there a quality requirement? Is the nature of the solution critical? Are there technical or rational grounds for selecting among possible solutions? Do I have sufficient information to make a high-quality decision? Is the problem structured? Are the alternative courses of action and methods for their evaluation known? Is acceptance of the decision by subordinates critical to its implementation? If I were to make the decision by myself, is it reasonably certain that it would be accepted by my subordinates? Do subordinates share the organizational goals to be obtained in solving this problem? Is conflict among subordinates likely in obtaining the preferred solution? Based on the answers, one can determine the appropriate styles from the model's decision graph. See also Leadership References External links Vroom-Yetton-Jago Normative Decision Model This is a simple explanation of the model along with the key criteria used for determining how much a manager should involve others in a decision-making process. Leadership
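To make the mechanics of the model concrete, here is a minimal Python sketch of how yes/no answers to the diagnostic questions can prune the set of feasible styles. The rule predicates below are a simplified paraphrase of Vroom and Yetton's elimination rules, not the exact published decision tree, so the mapping should be read as illustrative rather than authoritative.

```python
# Illustrative only: simplified paraphrase of the elimination rules,
# not the exact Vroom-Yetton (1973) decision tree.

STYLES = ["AI", "AII", "CI", "CII", "GII"]

def feasible_styles(quality_required, leader_has_info, problem_structured,
                    acceptance_critical, autocratic_accepted, goals_shared,
                    conflict_likely):
    feasible = set(STYLES)
    # Quality-protection rules (simplified).
    if quality_required and not leader_has_info:
        feasible.discard("AI")
        if not problem_structured:
            feasible -= {"AII", "CI"}
    if quality_required and not goals_shared:
        feasible.discard("GII")
    # Acceptance-protection rules (simplified).
    if acceptance_critical and not autocratic_accepted:
        feasible -= {"AI", "AII"}
        if conflict_likely:
            feasible.discard("CI")
        if goals_shared:
            feasible -= {"CI", "CII"}  # push toward group decision-making
    return sorted(feasible)

print(feasible_styles(quality_required=True, leader_has_info=False,
                      problem_structured=False, acceptance_critical=True,
                      autocratic_accepted=False, goals_shared=True,
                      conflict_likely=True))
# With these answers, only the group-based style survives: ['GII']
```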
0.769668
0.979063
0.753553
Positionality statement
A positionality statement, also called a reflexivity statement or identity statement, is a statement wherein a person (such as a researcher or teacher) reports and discusses their group identities, such as in a grant proposal or journal submission. They have become commonplace in certain fields of social science, especially within the United States. Positionality statements focus on an "author's racial, gender, class, or other self-identifications, experiences, and privileges", based on the idea that the author's identity can, intentionally or not, influence the results of their research. Scholars have commonly identified this risk in cases where the researcher is the sole point of connection between the audience and research subjects and, relatedly, when there exists a known power imbalance between the researcher and the research subject. The expectation and/or practice of writing a positionality statement can also inform the researcher of ways to mitigate the influence of their personal identity on the research by clarifying such interactions before the data collection or analysis process concludes. Criticism Positionality statements have also attracted controversy, and detractors have labeled them "research segregation", "positional piety", and "loyalty oaths". According to critics, an author may claim moral authority through affinity with subjects, or through a confession of difference of relative privilege. This has given rise to the concern that positionality statements can lead to "positional piety", where researchers are considered more or less credible based on race, gender, or other characteristics. On the other hand, supporters of positionality statements point out that such criticisms often stem from "bad" positionality statements, and instead argue for a comprehensive standard of quality. In Education Positionality statements have increased in popularity during the 2000s, required not just of researchers but also of students. A challenge has been the phenomenon of "phony positionality", wherein students learn to voice the beliefs expected in positionality statements without actually believing them. This "performative" positionality has been an obstacle to their adoption in the classroom. See also Perspectivism Postmodernism Social constructionism Standpoint theory Subjectivity References Social constructionism Intersectionality Identity politics
0.769096
0.979766
0.753534
Familialism
Familialism or familism is a philosophy that gives priority to the family. The term familialism has been specifically used for advocating a welfare system wherein it is presumed that families will take responsibility for the care of their members rather than leaving that responsibility to the government. The term familism relates more to family values. This can manifest as prioritizing the needs of the family higher than those of individuals. Yet, the two terms are often used interchangeably. In the Western world, familialism views the nuclear family of one father, one mother, and their child or children as the central and primary social unit of human ordering and the principal unit of a functioning society and civilization. In Asia, it is often viewed as traditional for aged parents to live with the family. It is suggested that Asian familialism became more fixed after encounters with Europeans following the Age of Discovery. In Japan, drafts based on French laws were rejected after critics argued that "civil law will destroy filial piety". Regarding familism as a fertility factor, there is limited support among Hispanics for an association between increased familism (in the sense of prioritizing the needs of the family above those of individuals) and an increased number of children. On the other hand, the fertility impact is unknown in regard to systems where the majority of the economic and caring responsibilities rest on the family (such as in Southern Europe), as opposed to defamilialized systems where welfare and caring responsibilities are largely supported by the state (such as the Nordic countries). Western familism In the Western world, familialism views the nuclear family of one father, one mother, and their child or children as the central and primary social unit of human ordering and the principal unit of a functioning society and civilization. Accordingly, this unit is also the basis of a multi-generational extended family, which is embedded in socially as well as genetically inter-related communities, nations, etc., and ultimately in the whole human family past, present and future. As such, Western familialism usually opposes other social forms and models that are chosen as alternatives (e.g. single-parent families, LGBT parenting, etc.). Historical and philosophical background of Western familism Ancient political familialism "Family as a model for the state" as an idea in political philosophy originated in the Socratic-Platonic principle of macrocosm/microcosm, which identifies recurrent patterns at larger and smaller scales of the cosmos, including the social world. In particular, monarchists have argued that the state mirrors the patriarchal family, with the subjects obeying the king as children obey their father, which in turn helps to justify monarchical or aristocratic rule. Plutarch (46–120 CE) records a laconic saying of the Dorians attributed to Lycurgus (8th century BCE). Asked why he did not establish a democracy in Lacedaemon (Sparta), Lycurgus responded, "Begin, friend, and set it up in your family". Plutarch claims that Spartan government resembled the family in its form. Aristotle (384–322 BCE) argued that the schema of authority and subordination exists in the whole of nature. He gave examples such as man and animal (domestic), man and wife, slaves and children. Further, he claimed that it is found in any animal, as in the relationship he believed to exist between soul and body, of which "the former is by nature the ruling and the latter the subject factor". 
Aristotle further asserted that "the government of a household is a monarchy since every house is governed by a single ruler". Later, he said that husbands exercise a republican government over their wives and monarchical government over their children, and that they exhibit political office over slaves and royal office over the family in general. Arius Didymus (1st century CE), cited centuries later by Stobaeus, wrote that "A primary kind of association (politeia) is the legal union of a man and woman for begetting children and for sharing life". From the collection of households a village is formed and from villages a city, "So just as the household yields for the city the seeds of its formation, thus it yields the constitution (politeia)". Further, Didymus claims that "Connected with the house is a pattern of monarchy, of aristocracy and of democracy. The relationship of parents to children is monarchic, of husbands to wives aristocratic, of children to one another democratic". Modern political familialism The family is at the center of the social philosophy of the early Chicago School of Economics. It is a recurring point of reference in the economic and social theories of its founder Frank Knight. Knight positions his notion of the family in contrast to the dominant notion of individualism: "Our 'individualism' is really 'familism'. ... The family is still the unit in production and consumption." Some modern thinkers, such as Louis de Bonald, have written as if the family were a miniature state. In his analysis of the family relationships of father, mother and child, Bonald related these to the functions of a state: the father is the power, the mother is the minister, and the child is the subject. As the father is "active and strong" and the child is "passive or weak", the mother is the "median term between the two extremes of this continuous proportion". Like many apologists for political familialism, De Bonald justified his analysis on biblical authority: "(It) calls man the reason, the head, the power of woman: Vir caput est mulieris (the man is head of the woman) says St. Paul. It calls woman the helper or minister of man: "Let us make man," says Genesis, "a helper similar to him." It calls the child a subject, since it tells it, in a thousand places, to obey its parents". Bonald also sees divorce as the first stage of disorder in the state, insisting that the deconstitution of the family brings about the deconstitution of the state, with The Kyklos not far behind. Erik von Kuehnelt-Leddihn also connects family and monarchy: "Due to its inherent patriarchalism, monarchy fits organically into the ecclesiastic and familistic pattern of a Christian society. (Compare the teaching of Pope Leo XIII: 'Likewise the powers of fathers of families preserves expressly a certain image and form of the authority which is in God, from which all paternity in heaven and earth receives its name—Eph 3.15') The relationship between the King as 'father of the fatherland' and the people is one of mutual love". George Lakoff has more recently claimed that the left-right distinction in politics reflects differing ideals of the family; for the right-wing, the ideal is a patriarchal family based upon absolutist morality; for the left-wing, the ideal is an unconditionally loving family. As a result, Lakoff argues, both sides find each other's views not only immoral, but incomprehensible, since they appear to violate each side's deeply held beliefs about personal morality in the sphere of the family. 
Criticism of Western familism Criticism in practice Familialism has been challenged as historically and sociologically inadequate to describe the complexity of actual family relations. In modern American society, in which the male head of the household can no longer be guaranteed a wage suitable to support a family, 1950s-style familialism has been criticized as counterproductive to family formation and fertility. Imposition of Western-style familialism on other cultures has been disruptive to traditional non-nuclear family forms such as matrilineality. The rhetoric of "family values" has been used to demonize single mothers and LGBT couples, who allegedly lack them. This has a disproportionate impact on the African-American community, as African-American women are more likely to be single mothers. Criticism from the LGBT community LGBT communities tend to accept and support the diversity of intimate human associations, partially as a result of their historically ostracized status from nuclear family structures. From its inception in the late 1960s, the gay rights movement has asserted every individual's right to create and define their own relationships and family in the way most conducive to the safety, happiness, and self-actualization of each individual. For example, the glossary of LGBT terms of Family Pride Canada, a Canadian organization advocating for family equality for LGBT parents, defines familialism as: Criticism in psychology Normalization of the nuclear family as the only healthy environment for children has been criticized by psychologists. In a peer-reviewed study from 2007, adoptees have been shown to display self-esteem comparable to that of non-adoptees. In a meta-study from 2012, "quality of parenting and parent–child relationships" is described as the most important factor in children's development. The study also found that "Dimensions of family structure including such factors as divorce, single parenthood, and the parents' sexual orientation and biological relatedness between parents and children are of little or no predictive importance". Criticism in psychoanalysis Gilles Deleuze and Félix Guattari, in their now-classic 1972 book Anti-Oedipus, argued that psychiatry and psychoanalysis, since their inception, have been affected by an incurable familialism, which is their ordinary bed and board. Psychoanalysis has never escaped from this, having remained captive to an unrepentant familialism. Michel Foucault wrote that through familialism psychoanalysis completed and perfected what the psychiatry of 19th-century insane asylums had set out to do and that it enforced the power structures of bourgeois society and its values: Family-Children (paternal authority), Fault-Punishment (immediate justice), Madness-Disorder (social and moral order). Deleuze and Guattari added that "the familialism inherent in psychoanalysis doesn't so much destroy classical psychiatry as shine forth as the latter's crowning achievement", and that since the 19th century, the study of mental illnesses and madness has remained the prisoner of the familial postulate and its correlates. Through familialism, and the psychoanalysis based on it, guilt is inscribed upon the family's smallest member, the child, and parental authority is absolved. According to Deleuze and Guattari, among the psychiatrists only Karl Jaspers and Ronald Laing have escaped familialism. 
This was not the case of the culturalist psychoanalysts, which, despite their conflict with orthodox psychoanalysts, had a "stubborn maintenance of a familialist perspective", still speaking "the same language of a familialized social realm". Criticism in Marxism In The Communist Manifesto of 1848, Karl Marx describes how the bourgeois or monogamous two-parent family has as its foundation capital and private gain. Marx also pointed out that this family existed only in its full form among the bourgeoisie or upper classes, and was nearly absent among the exploited proletariat or working class. He felt that the vanishment of capital would also result in the vanishment of the monogamous marriage, and the exploitation of the working class. He explains how family ties among the proletarians are divided by the capitalist system, and their children are used simply as instruments of labour. This is partly due to child labour laws being less strict at the time in Western society. In Marx's view, the bourgeois husband sees his wife as an instrument of labour, and therefore to be exploited, as instruments of production (or labour) exist under capitalism for this purpose. In The Origin of the Family, Private Property, and the State, published in 1884, Frederick Engels was also extremely critical of the monogamous two parent family and viewed it as one of many institutions for the division of labour in capitalist society. In his chapter "The Monogamous Family", Engels traces monogamous marriage back to the Greeks, who viewed the practice's sole aim as making "the man supreme in the family, and to propagate, as the future heirs to his wealth, children indisputably his own". He felt that the monogamous marriage made explicit the subjugation of one sex by the other throughout history, and that the first division of labour "is that between man and woman for the propagation of children". Engels views the monogamous two-parent family as a microcosm of society, stating "It is the cellular form of civilized society, in which the nature of the oppositions and contradictions fully active in that society can be already studied". Engels pointed out disparities between the legal recognition of a marriage, and the reality of it. A legal marriage is entered into freely by both partners, and the law states both partners must have common ground in rights and duties. There are other factors that the bureaucratic legal system cannot take into account however, since it is "not the law's business". These may include differences in the class position of both parties and pressure on them from outside to bear children. For Engels, the obligation of the husband in the traditional two-parent familial structure is to earn a living and support his family. This gives him a position of supremacy. This role is given without a particular need for special legal titles or privileges. Within the family, he represents the bourgeois, and the wife represents the proletariat. Engels, on the other hand, equates the position of the wife in marriage with one of exploitation and prostitution, as she sells her body "once and for all into slavery". More recent criticism from a Marxist perspective comes from Lisa Healy in her 2009 essay "Capitalism and the Transforming Family Unit: A Marxist Analysis". Her essay examines the single-parent family, defining it as one parent, often a woman, living with one or more usually unmarried children. 
The stigmatization of lone parents is tied to their low rate of participation in the workforce, and a pattern of dependency on welfare. This results in less significant contributions to the capitalist system on their part. This stigmatization is reinforced by the state, such as through insufficient welfare payments. This exposes capitalist interests that are inherent to their society and which favour two-parent families. In politics Australia The Family First Party originally contested the 2002 South Australian state election, where former Assemblies of God pastor Andrew Evans won one of the eleven seats in the 22-seat South Australian Legislative Council on 4 percent of the statewide vote. The party made their federal debut at the 2004 general election, electing Steve Fielding on 2 percent of the Victorian vote in the Australian Senate, out of six Victorian senate seats up for election. Both MPs were able to be elected with Australia's Single Transferable Vote and Group voting ticket system in the upper house. The party opposes abortion, euthanasia, harm reduction, gay adoptions, in-vitro fertilisation (IVF) for gay couples and gay civil unions. It supports drug prevention, zero tolerance for law breaking, rehabilitation, and avoidance of all sexual behaviors it considers deviant. In the 2007 Australian election, Family First came under fire for giving preferences in some areas to the Liberty and Democracy Party, a libertarian party that supports legalization of incest, gay marriage, and drug use. United Kingdom Family values was a recurrent theme in the Conservative government of John Major. His Back to Basics initiative became the subject of ridicule after the party was affected by a series of sleaze scandals. John Major himself, the architect of the policy, was subsequently found to have had an affair with Edwina Currie. Family values were revived under David Cameron, being a recurring theme in his speeches on social responsibility and related policies, demonstrated by his Marriage Tax allowance policy which would provide tax breaks for married couples. New Zealand Family values politics reached their apex under the social conservative administration of the Third National Government (1975–84), widely criticised for its populist and social conservative views about abortion and homosexuality. Under the Fourth Labour Government (1984–90), homosexuality was decriminalised and abortion access became easier to obtain. In the early 1990s, New Zealand reformed its electoral system, replacing the first-past-the-post electoral system with the Mixed Member Proportional system. This provided a particular impetus to the formation of separatist conservative Christian political parties, disgruntled at the Fourth National Government (1990–99), which seemed to embrace bipartisan social liberalism to offset Labour's earlier appeal to social liberal voters. Such parties tried to recruit conservative Christian voters to blunt social liberal legislative reforms, but had meagre success in doing so. During the tenure of Fifth Labour Government (1999–2008), prostitution law reform (2003), same-sex civil unions (2005) and the repeal of laws that permitted parental corporal punishment of children (2007) became law. At present, Family First New Zealand, a 'non-partisan' social conservative lobby group, operates to try to forestall further legislative reforms such as same-sex marriage and same-sex adoption. 
In 2005, conservative Christians tried to pre-emptively ban same-sex marriage in New Zealand through alterations to the New Zealand Bill of Rights Act 1990, but the bill failed 47 votes to 73 at its first reading. At most, the only durable success such organisations can claim in New Zealand is the continuing criminality of cannabis possession and use under New Zealand's Misuse of Drugs Act 1975. Russia Federal law of Russian Federation no. 436-FZ of 2010-12-23 "On Protecting Children from Information Harmful to Their Health and Development" lists information "negating family values and forming disrespect to parents and/or other family members" as information not suitable for children ("18+" rating). It does not contain any separate definition of family values. Singapore Singapore's main political party, the People's Action Party, promotes family values intensively. Former Prime Minister Lee Hsien Loong said that "The family is the basic building block of our society. [...] And by "family" in Singapore, we mean one man, one woman, marrying, having children and bringing up children within that framework of a stable family unit." One MP has described the nature of family values in the city-state as "almost Victorian in nature". The government is opposed to same-sex adoption. The Singaporean justice system uses corporal punishment. United States The use of family values as a political term dates back to 1976, when it appeared in the Republican Party platform. The phrase became more widespread after Vice President Dan Quayle used it in a speech at the 1992 Republican National Convention. Quayle had also launched a national controversy when he criticized the television program Murphy Brown for a story line that depicted the title character becoming a single mother by choice, citing it as an example of how popular culture contributes to a "poverty of values", and saying: "[i]t doesn't help matters when primetime TV has Murphy Brown—a character who supposedly epitomizes today's intelligent, highly paid, professional woman—mocking the importance of fathers, by bearing a child alone, and calling it just another 'lifestyle choice'". Quayle's remarks initiated widespread controversy, and have had a continuing effect on U.S. politics. Stephanie Coontz, a professor of family history and the author of several books and essays about the history of marriage, says that this brief remark by Quayle about Murphy Brown "kicked off more than a decade of outcries against the 'collapse of the family'". In 1998, a Harris survey found that: 52% of women and 42% of men thought family values means "loving, taking care of, and supporting each other" 38% of women and 35% of men thought family values means "knowing right from wrong and having good values" 2% of women and 1% men thought of family values in terms of the "traditional family" The survey noted that 93% of all women thought that society should value all types of families (Harris did not publish the responses for men). Republican Party Since 1980, the Republican Party has used the issue of family values to attract socially conservative voters. 
While "family values" remains an amorphous concept, social conservatives usually understand the term to include some combination of the following principles (also referenced in the 2004 Republican Party platform): opposition to sex outside of marriage; support for a traditional role for women in "the family"; opposition to same-sex marriage, homosexuality and gender transition; support for complementarianism; opposition to legalized induced abortion; support for abstinence-only sex education; and support for policies said to protect children from obscenity and exploitation. Social and religious conservatives often use the term "family values" to promote conservative ideology that supports traditional morality or Christian values. Social conservatism in the United States is centered on the preservation of what adherents often call 'traditional' or 'family values'. Some American conservative Christians see their religion as the source of morality and consider the nuclear family an essential element in society. For example, "The American Family Association exists to motivate and equip citizens to change the culture to reflect Biblical truth and traditional family values." Such groups variously oppose abortion, pornography, masturbation, pre-marital sex, polygamy, homosexuality, certain aspects of feminism, cohabitation, separation of church and state, legalization of recreational drugs, and depictions of sexuality in the media. Democratic Party Although the term "family values" remains a core issue for the Republican Party, the Democratic Party has also used the term, though differing in its definition. In his acceptance speech at the 2004 Democratic National Convention, John Kerry said "it is time for those who talk about family values to start valuing families". Other liberals have used the phrase to support such values as family planning, affordable child-care, and maternity leave. For example, groups such as People For the American Way, Planned Parenthood, and Parents and Friends of Lesbians and Gays have attempted to define the concept in a way that promotes the acceptance of single-parent families, same-sex monogamous relationships and marriage. This understanding of family values does not promote conservative morality, instead focusing on encouraging and supporting alternative family structures, access to contraception and abortion, increasing the minimum wage, sex education, childcare, and parent-friendly employment laws, which provide for maternity leave and leave for medical emergencies involving children. While conservative sexual ethics focus on preventing premarital or non-procreative sex, liberal sexual ethics are typically directed towards consent instead, regardless of whether or not the partners are married. Demographics Population studies have found that in 2004 and 2008, liberal-voting ("blue") states have lower rates of divorce and teenage pregnancy than conservative-voting ("red") states. June Carbone, author of Red Families vs. Blue Families, opines that the driving factor is that people in liberal states tend to wait longer before getting married. A 2002 government survey found that 95% of adult Americans had premarital sex. This number had risen slightly from the 1950s, when it was nearly 90%. The median age of first premarital sex has dropped in that time from 20.4 to 17.6. Christian right The Christian right often promotes the term family values to refer to their version of familialism. 
Focus on the Family is an American Christian conservative organization whose family values include adoption by married, opposite-sex parents; and traditional gender roles. It opposes abortion, divorce, LGBT rights, particularly LGBT adoption and same-sex marriage, pornography, masturbation, and pre-marital sex. The Family Research Council is an example of a right-wing organization claiming to uphold traditional family values. Due to its usage of virulent anti-gay rhetoric and opposition to civil rights for LGBT people, it was classified as a hate group. See also Nepotism, favoritism granted to relatives and friends without regard to merit Nuclear family, a family group consisting of a pair of adults and their children Natalism, a belief that promotes human reproduction Extended family Single parent Family Coalition Party of British Columbia Family Party of Germany League of Polish Families Nepal Pariwar Dal New Reform Party of Ontario, founded as Family Coalition Party of Ontario Party for Japanese Kokoro The People of Family We Are Family (Slovakia) World Congress of Families References Plutarch: The Lives of the Noble Grecians and Romans, trans. by John Dryden and revised by Arthur Hugh Clough, The Modern Library (div of Random House, Inc). Bio on Lycurgus; pg 65. Politics, Aristotle, Loeb Classical Library, Bk I, §II 8–10; 1254a 20–35; pg 19–21 Politics, Bk I, §11,21;1255b 15–20; pg 29. Hellenistic Commentary to the New Testament, ed. By M. Eugene Boring, Klaus Berger, Carsten Colpe, Abingdon Press, Nashville, TN, 1995. Hellenistic Commentary to the New Testament, ed. By M. Eugene Boring, Klaus Berger, Carsten Colpe, Abingdon Press, Nashville, TN, 1995. On Divorce, Louis de Bonald, trans. By Nicholas Davidson, Transaction Publishers, New Brunswick, 1993. pp 44–46. On Divorce, Louis de Bonald, pp 88–89; 149. Liberty or Equality, Von Kuehnelt-Leddihn, pg 155. George Lakoff, What Conservatives Know That Liberals Don't, Frank H. Knight, (1923). The Ethics of Competition. The Quarterly Journal of Economics, 37(4), 579–624. https://doi.org/10.2307/1884053, p. 590f. Noppeney, C. (1998). Zwischen Chicago-Schule und Ordoliberalismus: Wirtschaftsethische Spuren in der Ökonomie Frank Knights (Bd. 21). Bern: Paul Haupt, p. 176ff, Further reading Anne Revillard (2007) Stating Family Values and Women's Rights: Familialism and Feminism Within the French Republic French Politics 5, 210–228. Alberto Alesina; Paola Giuliano (2010) The Power of the Family Journal of Economic Growth, vol. 15(2), 93-125 Frederick Engels (1884) The Monogamous Family The Origin of the Family, Private Property and the State. Chapter 2, Part 4. Retrieved 24 October 2013. Carle C. Zimmerman (1947) Family and Civilization The close and causal connections between the rise and fall of different types of families and the rise and fall of civilizations. Zimmerman traces the evolution of family structure from tribes and clans to extended and large nuclear families to the small nuclear families and broken families of today. Family Ideologies Social ideologies Political ideologies Conservatism Social conservatism Censorship of LGBTQ issues
0.763507
0.986923
0.753522
Participatory rural appraisal
Participatory rural appraisal (PRA) is an approach used by non-governmental organizations (NGOs) and other agencies involved in international development. The approach aims to incorporate the knowledge and opinions of rural people in the planning and management of development projects and programmes. Origins The philosophical roots of participatory rural appraisal techniques can be traced to activist adult education methods such as those of Paulo Freire and the study clubs of the Antigonish Movement. In this view, an actively involved and empowered local population is essential to successful rural community development. Robert Chambers, a key exponent of PRA, argued that the approach owes much to "the Freirian theme, that poor and exploited people can and should be enabled to analyze their own reality." By the early 1980s, there was growing dissatisfaction among development experts with both the reductionism of formal surveys and the biases of typical field visits. In 1983, Robert Chambers, a Fellow at the Institute of Development Studies (UK), used the term rapid rural appraisal (RRA) to describe techniques that could bring about a "reversal of learning", to learn from rural people directly. Two years later, the first international conference to share experiences relating to RRA was held in Thailand. This was followed by rapid acceptance of methods that involved rural people in examining their own problems, setting their own goals, and monitoring their own achievements. By the mid-1990s, the term RRA had been replaced by a number of other terms including participatory rural appraisal (PRA) and participatory learning and action (PLA). Robert Chambers acknowledged that the significant breakthroughs and innovations that informed the methodology came from community development practitioners in Africa, India and elsewhere. Chambers helped PRA gain acceptance among practitioners. Chambers explained the function of participatory research in PRA as follows: Overview of techniques Over the years techniques and tools have been described in a variety of books and newsletters, or taught at training courses. However, the field has been criticized for lacking a systematic evidence-based methodology. The basic techniques used include: understanding group dynamics (e.g. learning contracts, role reversals, feedback sessions); surveying and sampling (e.g. transect walks, wealth ranking, social mapping); interviewing (e.g. focus group discussions, semi-structured interviews, triangulation); and community mapping (e.g. Venn diagrams, matrix scoring, ecograms, timelines). To ensure that people are not excluded from participation, these techniques avoid writing wherever possible, relying instead on the tools of oral communication and visual communication such as pictures, symbols, physical objects and group memory. Efforts are made in many projects, however, to build a bridge to formal literacy; for example by teaching people how to sign their names or recognize their signatures. Often developing communities are reluctant to permit invasive audio-visual recording. Developmental changes in PRA Since the early 21st century, some practitioners have replaced PRA with the standardized model of community-based participatory research (CBPR) or with participatory action research (PAR). 
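Of the techniques listed above, matrix scoring has a simple quantitative core that is easy to sketch. The Python example below is hypothetical: the crop options, criteria and scores are invented, and in a real PRA exercise the participants themselves assign the scores, often with seeds or stones rather than figures on paper.

```python
# Hypothetical matrix-scoring exercise: tally group scores for each option
# against each criterion and rank the options by total score.

options = ["maize", "cassava", "beans"]
criteria = ["yield", "drought tolerance", "market price", "ease of storage"]

# scores[option][criterion]: 1 (poor) to 5 (good), as placed by the group
scores = {
    "maize":   {"yield": 5, "drought tolerance": 2, "market price": 4, "ease of storage": 3},
    "cassava": {"yield": 3, "drought tolerance": 5, "market price": 2, "ease of storage": 5},
    "beans":   {"yield": 2, "drought tolerance": 3, "market price": 5, "ease of storage": 2},
}

totals = {opt: sum(scores[opt][c] for c in criteria) for opt in options}
for opt, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{opt:8s} {total}")
```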
Social survey techniques have also changed during this period, including greater use of information technology such as fuzzy cognitive maps, e-participation, telepresence, social network analysis, topic models, geographic information systems (GIS), and interactive multimedia. See also References Further reading Participatory Learning and Action / PLA Notes archive. Started in the 1980s and first known as RRA Notes, then as PLA Notes, and then as Participatory Learning and Action, this archive of articles is a joint collaboration of the International Institute for Environment and Development (IIED) and the Institute of Development Studies (IDS). Participatory democracy Political science education International development Rural community development Group processes
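Among the newer tools mentioned above, a fuzzy cognitive map is the most compact to illustrate: concepts hold activation levels, participants supply signed influence weights, and the map is iterated until the activations settle. The concepts and weights in the sketch below are invented for illustration, and the update rule (a sigmoid of the weighted incoming activations, with the scenario concept clamped) is one common variant among several.

```python
import math

# Minimal fuzzy cognitive map (FCM) sketch. Concepts and weights are invented;
# in a participatory exercise they would be elicited from the group.
concepts = ["rainfall", "crop yield", "household income", "school attendance"]
weights = [  # weights[i][j]: influence of concept i on concept j, in [-1, 1]
    [0.0, 0.8, 0.0, 0.0],
    [0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.0],
]

def squash(x):
    return 1.0 / (1.0 + math.exp(-x))  # keep activations in (0, 1)

def step(state):
    n = len(concepts)
    return [squash(sum(weights[i][j] * state[i] for i in range(n))) for j in range(n)]

state = [1.0, 0.5, 0.5, 0.5]   # scenario: high rainfall
for _ in range(30):
    state = step(state)
    state[0] = 1.0             # clamp the scenario concept each iteration
print({c: round(v, 2) for c, v in zip(concepts, state)})
```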
0.762502
0.988209
0.753512
Complexity theory and organizations
Complexity theory and organizations, also called complexity strategy or complex adaptive organizations, is the use of the study of complex systems in the field of strategic management and organizational studies. It draws from research in the natural sciences that examines uncertainty and non-linearity. Complexity theory emphasizes interactions and the accompanying feedback loops that constantly change systems. While it proposes that systems are unpredictable, they are also constrained by order-generating rules. Complexity theory has been used in the fields of strategic management and organizational studies. Application areas include understanding how organizations or firms adapt to their environments and how they cope with conditions of uncertainty. Organizations have complex structures in that they are dynamic networks of interactions, and their relationships are not aggregations of the individual static entities. They are adaptive in that individual and collective behaviors mutate and self-organize in response to a change-initiating micro-event or collection of events. Key concepts Complex adaptive systems Organizations can be treated as complex adaptive systems (CAS) as they exhibit fundamental CAS principles like self-organization, complexity, emergence, interdependence, space of possibilities, co-evolution, chaos, and self-similarity. CAS are contrasted with ordered and chaotic systems by the relationship that exists between the system and the agents which act within it. In an ordered system the level of constraint means that all agent behavior is limited to the rules of the system. In a chaotic system, the agents are unconstrained and susceptible to statistical and other analyses. In a CAS, the system and the agents co-evolve; the system lightly constrains agent behavior, but the agents modify the system by their interaction with it. This self-organizing nature is an important characteristic of CAS, and its ability to learn and adapt differentiates it from other self-organizing systems. Organizational environments can be viewed as complex adaptive systems where coevolution generally occurs near the edge of chaos, and organizations should maintain a balance between flexibility and stability to avoid failure. In response to turbulent environments, businesses bring out flexibility, creativity, agility, and innovation near the edge of chaos, provided the organizational structure has sufficiently decentralized, non-hierarchical network structures. Implications for organizational management CAS approaches to strategy seek to understand the nature of system constraints and agent interaction and generally take an evolutionary or naturalistic approach to strategy. Some research integrates computer simulation and organizational studies. Complexity theory and knowledge management Complexity theory also relates to knowledge management (KM) and organizational learning (OL). "Complex systems are, by any other definition, learning organizations." Complexity Theory, KM, and OL are all complementary and co-dependent. "KM and OL each lack a theory of how cognition happens in human social systems – complexity theory offers this missing piece". Complexity theory and project management Complexity theory is also being used to better understand new ways of doing project management, as traditional models have been found lacking for current challenges. This approach advocates forming a "culture of trust" that "welcomes outsiders, embraces new ideas, and promotes cooperation." 
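The passage above notes that some of this research integrates computer simulation with organizational studies. A minimal sketch of one such simulation is an NK-style rugged fitness landscape (after Kauffman, and used in organization studies by, for example, Levinthal): an organization is represented as a string of N policy choices, each choice's payoff depends on K other choices, and the organization adapts by changing one choice at a time, which tends to strand it on a local peak. The construction below is a standard simplification, not a reproduction of any particular published model.

```python
import random

# Sketch of an NK-style rugged-landscape search; a simplified illustration only.
N, K = 10, 3                      # N policy choices, each interacting with K others
random.seed(42)
neighbours = [random.sample([j for j in range(N) if j != i], K) for i in range(N)]
contribution = {}                 # lazily generated fitness contributions

def fitness(config):
    total = 0.0
    for i in range(N):
        # Contribution of choice i depends on its own state and K neighbours.
        key = (i, config[i]) + tuple(config[j] for j in neighbours[i])
        if key not in contribution:
            contribution[key] = random.random()
        total += contribution[key]
    return total / N

def local_search(config, steps=500):
    # Adapt by flipping one policy choice at a time, keeping only improvements.
    for _ in range(steps):
        i = random.randrange(N)
        candidate = config[:i] + (1 - config[i],) + config[i + 1:]
        if fitness(candidate) > fitness(config):
            config = candidate
    return config

start = tuple(random.randint(0, 1) for _ in range(N))
peak = local_search(start)
print(f"start fitness {fitness(start):.3f} -> local peak {fitness(peak):.3f}")
```

With higher K the landscape becomes more rugged, so independent searches from different starting points end on different local peaks, which is the usual way this literature illustrates path dependence in organizational adaptation.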
Recommendations for managers Complexity Theory implies approaches that focus on flatter, more flexible organizations, rather than top-down, command-and-control styles of management. Practical examples A typical example for an organization behaving as CAS is Wikipedia, which is collaborated and managed by a loosely organized management structure that is composed of a complex mix of human–computer interactions. By managing behavior, and not only content, Wikipedia uses simple rules to produce a complex, evolving knowledge base that has largely replaced older sources in popular use. Other examples include: the complex global macroeconomic network within a country or group of countries; stock market and complex web of cross-border holding companies; manufacturing businesses; and any human social group-based endeavor in a particular ideology and social system such as political parties, communities, geopolitical organizations, and terrorist networks of both hierarchical and leaderless nature. This new macro level state may create difficulty for an observer in explaining and describing the collective behavior in terms of its constituent parts, as a result of the complex dynamic networks of interactions, outlined earlier. See also Complexity theory (disambiguation) Cynefin Centre for Organisational Complexity The Santa Fe Institute Global brain Self-organization The New England Complex Systems Institute Ralph Douglas Stacey Complex Adaptive Leadership References Further reading Axelrod, R. A., & Cohen, M. D., 2000. Harnessing Complexity: Organizational Implications of a Scientific Frontier. New York: The Free Press Yaneer Bar-Yam (2005). Making Things Work: Solving Complex Problems in a Complex World. Cambridge, MA: Knowledge Press Beautement, P. & Broenner, C. 2010. Complexity Demystified: A Guide for Practitioners. Originally published in Axminster: Triarchy Press Biermann, F. & Kim, R.E. (Eds). 2020. Architectures of Earth System Governance: Institutional Complexity and Structural Transformation. Cambridge University Press. Brown, S. L., & Eisenhardt, K. M. 1997. The Art of Continuous Change: Linking Complexity Theory and Time-paced Evolution in Relentlessly Shifting Organizations. Administrative Science Quarterly, 42: 1–34 Burns, S., & Stalker, G. M. 1961. The Management of Innovation. London: Tavistock Publications Davis, J. P., Eisenhardt, K. M., & Bingham, C. B. 2009. Optimal Structure, Market Dynamism, and the Strategy of Simple Rules. Administrative Science Quarterly, 54: 413–452 De Toni, A.F., Comello, L., 2010. Journey into Complexity. Udine: Lulu Publisher Fonseca, J. (2001). Complexity and Innovation in Organizations. London: Routledge Douma, S. & H. Schreuder, Economic Approaches to Organizations, 6th edition, Harlow: Pearson. Gell-Mann, M. 1994. The Quark and the Jaguar: Adventures in the Simple and the Complex. New York: WH Freeman Kauffman, S. 1993. The Origins of Order. New York, NY: Oxford University Press. Levinthal, D. 1997. Adaptation on Rugged Landscapes. Management Science, 43: 934–950 Liang, T.Y. 2016. Complexity-Intelligence Strategy: A New Paradigmatic Shift. Singapore: World Scientific Publishing. March, J. G. 1991. Exploration and Exploitation in Organizational Learning. Organization Science, 2(1): 71–87 McKelvey, B. 1999. Avoiding Complexity Catastrophe in Coevolutionary Pockets: Strategies for Rugged Landscapes. Organization Science, 10(3): 249–321 McMillan, E. 2004 Complexity, Organizations and Change. Routledge. Hardback. Paperback Moffat, James. 2003. 
Complexity Theory and Network Centric Warfare. Obolensky N. 2010 Complex Adaptive Leadership - Embracing Paradox and Uncertainty Perrow, C. Complex Organizations: A Critical Essay Scott, Forseman & Co., Glenville, Illinois Rivkin, J., W. 2000. Imitation of Complex Strategies. Management Science, 46(6): 824–844 Rivkin, J. and Siggelkow, N. 2003. Balancing Search and Stability: Interdependencies Among Elements of Organizational Design. Management Science, 49, pp. 290–311 Rudolph, J., & Repenning, N. 2002. Disaster Dynamics: Understanding the Role of Quantity in Organizational Collapse. Administrative Science Quarterly, 47: 1–30 Schilling, M. A. 2000. Toward a General Modular Systems Theory and its Applicability to Interfirm Product Modularity. Academy of Management Review, 25(2): 312–334 Siggelkow, S. 2002. Evolution toward Fit. Administrative Science Quarterly, 47, pp. 125–159 Simon, H. 1996 (1969; 1981) The Sciences of the Artificial (3rd Edition) MIT Press Smith, Edward. 2006. Complexity, Networking, and Effects Based Approaches to Operations] by Edward Snowden, D.J. Boone, M. 2007. "A Leader's Framework for Decision Making". Harvard Business Review, November 2007, pp. 69–76. Weick, K. E. 1976. Educational Organizations as loosely coupled systems. Administrative Science Quarterly, 21(1): 1–19 Systems science Business economics Technology strategy Complex systems theory Cybernetics
0.763663
0.986706
0.753511